lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp... Somewhat belated because I took a couple of weeks vacation....

> > The RD would translate this into the appropriate drawings,
> > specifications, etc. for fabrication. I assume that when you produce RTL you want the
> > translator to create it from the OOA. I would regard the RTL as something a transform
> > does from a higher level abstraction and the translator would simply package that output
> > with the other documents.
>
> It sounds nice and simple, doesn't it.
>
> But what does the translator do? While it's reasonably straightforward to say that
> every attribute is a register and actions are combinatorial logic, such a view is
> far too simplistic. Firstly, you need to think about creation and deletion of
> objects. Do you assign a register to a specific instance of an attribute; or do
> you maintain a pool of registers to be assigned to a limited number of objects?
> Then you need to think about events. Can you compress an event into combinatorial
> logic, or does it need a register? What happens if 2 events are generated to an
> instance simultaneously? How many event queues, if any? How big does each event
> queue need to be? Is a transform so complex that it needs multiple clock cycles?
> If so, do you use a multicycle path, or do you add registers? If the latter, do
> you pipeline state actions? Or is the pipeline orthogonal?

Though it is pure speculation because I have never attempted HW design, I still think that this seems like an issue of the level of abstraction and the type of abstractions used. We could probably go around about this for a while, but without a concrete example to model it would be wheel spinning, and a useful concrete example would be too complicated for EMail text. Lacking such an example, any simple proposal can immediately be countered with a contrary case where it won't work.

For example, if I am modeling at a level of abstraction where events are effectively signals, then event synchronization becomes trivial, transforms are limited to a single cycle, and I am going to have to come up with some clever abstractions (e.g., an ordered set of Signal instances) to handle sequential logic. However, this probably doesn't work real well for pipelines. So maybe the pipeline gets modeled in another domain at a lower level of abstraction. Or maybe I need a better way of modeling time sequences (e.g., a succession of self-directed states within a Pipeline instance). My point is that I can think of plausible mechanisms to deal with each of your issues -- what is moot is whether I can package a set of those mechanisms into a consistent solution for a particular, complex hardware design.

> When you sit down and analyse the design decisions made by an ASIC designer,
> you find that it's very complex (not really surprising). RD tells us to construct
> an adequate model of the design process; I don't like to prejudge the results;
> however, I think that OOA-to-RTL is a bit too far to jump in one step.

OTOH, it seems to me that you are using the translation rules to capture design issues (or you are moving translation rules into the OOA meta model). If one is attacking a hardware design as the subject matter of an application, then the abstract, logical solution should be complete in the OOA alone.
What I am arguing is that if one cannot describe an arbitrary hardware design as a complete, abstract, logical solution in an OOA, then S-M is probably not a good HW design tool. It might be possible to tweak the OOA meta model in a manner that would allow such a description, but that opens a debate on whether one wants to do this if there are already CAD/CAE tools designed to do it. I don't think it would be appropriate to bury critical parts of the HW design in the translation rules.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn... Sorry I'm tardy -- I was out on vacation....

> > > The order in which external events are processed is not changed by
> > > warping.
> >
> > True, but the order of external and internal events _is_ changed.
>
> No, this is never true in my architecture. But the order in which
> internal events are generated is changed, which is really the point.

Let's suppose the order that events have been received and are sitting in the queues when warp is engaged are:

A1 (internal)
B1 (external)
A2 (internal)
W1 (event to start warp processing)

W2 (event generated during warp processing)

If there were no warp these would be popped and executed as A1, B1, A2, W1, W2. However, when you introduce the warp deferral of the external queue the order of processing will be A1, A2, W1, W2, B1 with warp. Therefore you have modified the relative ordering of internal and external events. This can cause problems if self-directed events are involved (see below).

> > It is fair for an OOA to
> > rely upon a problem space constraint that orders the external events. Since self-directed
> > events take precedence, the OOA might use them to set a state machine to the proper
> > state to accept the second event.
>
> Self-directed events are always internal events, so they're always
> processed before the next external event is even looked at.

True. But let's look at another queue:

B1 (external event)
W1 (event to kick off warp processing)

A2 (internal event generated by warp processing)

Suppose B1 is directed to instance A and when that action is processed, event A1 would be generated. Without warp processing the queue would look like

B1 (external event)
W1 (event to kick off warp processing)
A1 (internal event resulting from B1)
A2 (internal event generated by warp processing)

But with warp processing we have W1, A2, B1, A1 as the execution sequence. If A1 and A2 are self-directed, then this violates the rules because the order of the self-directed A1/A2 has been reversed. [It would be incorrect even if they were not self-directed, so long as they came from the same source and were targeted to the same destination.]

One can argue that this is not a problem since there is no guarantee that B1 will be processed in any particular order, even if it is directed at the A1/A2 instance, because it comes from a different source. My issue is that this is a stretch if the architecture would have processed B1 first if an event counter were used rather than warp -- the model should execute A1/A2 the same way in both situations.

> > Suppose some action in the initiated processing generates two events.
>
> I'll assume you mean these events are generated during the warp.
> > > One is the event that > > announces it has completed the processing the originator wanted. > > The originator does not require (and must not get sent) such an > event. BTW, states concerned with maintaining the counter in the > originator are also unnecessary. I confused things by mixing the warp and non-warp cases. In the non-warp case an event is generated back to the originator that is counted to see when the loop is done. I am assuming that this is done in the same action with the generation of an event for other processing. You would replace that return event to the originator with the warp termination. My point is that the warp doesn't terminate because there is still an unrelated event on the queue from the action that should have terminated the processing. > > The other event starts a > > whole new chain of unrelated (to the originator) processing. > > The new chain must be related since it was started during the warp. Ah, this is where we disagree. I agree that most iterations begin and end at the same instance and that instance is usually some sort of controller for a grander sequence. In this case the terminating action does not generate the next event, the originator does after all the return events are counted or warp returns. However, it is not uncommon to have to wait for only a subset of processing. In that case warp needs to terminate when that subset is complete, but another event to complete the overall processing is generated in the terminating action. As a trivial example, let's say an instance of object A wants to delete itself after sending off an event to each of several instances of object B. However, the B instances need to access an attribute from the A instance. Therefore the A instance can't delete itself until all the B instances have accessed the data, so it needs to wait for that. [Why not include the data on the event? Because Dave Whipp doesn't like data on events. Actually, I am just keeping things simple; a self-delete is a very clear example of the principle.] However, once the B instances access the data they will merrily continue processing by placing other events on the queue. Those new events initiate chains of processing that have nothing to do with the constraint on deleting the A instance. > > That unrelated processing will > > continue to throw events on the queue preventing the exit from warp even after all the > > processing that the originator was really interested in was done. > > No matter how many internal events are created during the warp the > internal event queue will always empty eventually. This is the only > condition required for ending the warp. True, but that may be the termination of application execution. In most situations I can think of that is probably incorrect behavior. If you recall, the reason you suggested the warp was to terminate a loop _to allow an event to be generated_ after a subset of processing was completed. If processing is going to proceed unimpeded until application termination without doing anything, why would one have needed to count events or warp to begin with? The empty-queue return can be a curse as well. What if the warp processing has a wait state for an external event that must be received before processing is complete? The warp will return prematurely. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. 
L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- My recent dialog with Whipp and another on OTUG has prompted me to reconsider what is in a translation. To get us out of the Holiday Doldrums, I will toss these ruminations out here. A basic tenet of the methodology is that the OOA represents a complete, abstract representation of the logical solution that is independent of the implementation environment. This is translated into a specific implementation through Recursive Design. So far, so good. To do RD one needs three things: an OOA, an Architecture, and a suite of translation rules. For this discussion I am using the term Architecture loosely to include both realized implementation artifacts (e.g., libraries, compilers, etc.) and the infrastructure for mapping these against OOA constructs (e.g., templates, an OOA of implementation artifacts, or whatever). This brings me to the first thing that may be controversial. It has always been my understanding that the Architecture for a particular implementation environment is independent of the applications being translated. That is, the Architecture should not have special processing for particular applications -- the translation rules make all the choices. While the Architecture and the OOA are independent of each other, the translation rules are dependent upon both, just like a bridge between domains. The thing that is troubling me now is the degree of dependence. To stretch the bridge analogy, the wormhole paper postulates only a syntactic shift across the bridge while Whipp, I, and others speculated that a semantic shift was also necessary. Is there a similar issue of dichotomy of dependence for translation? A simplistic view of translation would have it do two things: provide a fixed infrastructure and map specific OOA constructs into particular implementation constructs. The fixed infrastructure handles things like data store accesses for the given database, maintaining referential and data integrity, and event management appropriate for the selected view of time and the nature of the environment. I have always assumed that this infrastructure is also application independent -- at least to the extent that one can say, "this entire application is of type X". In a very simplistic view of translation the mapping can also be fixed, in which case every application is translated the same way. This is generally undesirable for performance reasons. It is useful to be able to use problem space information to map particular OOA constructs into particular implementation constructs. For example, if the R1 and R2 relationships are both 1:M, it may be better to use a linked list of instances for one and embedded pointers for the other, depending upon maximum elements in the set, frequency of navigation, direction of navigation, etc. This sort of mapping has traditionally been handled by Colorization where somebody familiar with the problem space colors the OOA constructs to indicate to the translation rules what mappings to use. My first question is: is colorization the Analyst's job or the Architect's job? I have assumed that it is the Analyst's job because it requires problem space knowledge. However, the analyst typically needs to know the details of the Architecture to do this (e.g., the analyst selects a particular template). 
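To make the kind of information involved concrete, here is a toy illustration of the facts a coloring for the R1/R2 example above might carry. The names below are mine, invented purely for the example; no particular CASE tool's coloring scheme is implied.

    // Toy sketch only: hypothetical problem-space "colors" an Analyst might
    // attach to 1:M relationships.  All field names are invented for this
    // illustration; real tools each have their own tagging scheme.
    #include <string>
    #include <vector>

    struct RelationshipColor {
        std::string relationship;       // e.g. "R1"
        int         maxParticipants;    // worst-case size of the instance set
        bool        boundIsFixed;       // can maxParticipants ever be exceeded?
        int         navigationsPerTick; // rough frequency of navigation
        bool        navigatedBothWays;  // is the reverse direction also used?
    };

    // The Analyst supplies only facts like these; which mechanism they imply
    // (linked list, embedded pointers, etc.) is the translation rules' business.
    const std::vector<RelationshipColor> colors = {
        { "R1", 1000, false,   50, true  },  // large, unbounded, lightly navigated
        { "R2",    4, true,  2000, false },  // small fixed set on a hot path
    };
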
I have always had a problem with this architecture knowledge that the Analyst must have. I would much prefer to have the Architect identify the problem space facts needed to make the decisions for a particular Architecture and have the Analyst simply provide those facts without knowledge of how they are used. The Architect would then provide translation rules that use those facts to Do the Right Thing.

This brings me to my second question: is it possible to simply provide predefined problem space facts (perhaps tagged to OOA constructs) for a particular application and still get a high quality translation? And a related question: is Colorization necessary to resolve any issues other than performance? I have always thought not because the OOA is supposed to be a complete logical solution to the problem, so all that is left to resolve are implementation tradeoffs, such as space/time.

Finally, who does the domain bridges? Most people regard bridges as a translation issue but the Analyst almost always builds them. If the Architect is responsible for building them, this seems odd to me because now the Architect must do something manually for each application. I have always regarded the Architect's role as performing a service akin to compiler writers -- what they do is done once for all applications in that environment. Moreover, bridges often require a knowledge of the application semantics (though one could get around this by having the Analyst write a specification, in which case the Architect's role is reduced to being a coder).

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Carolyn Duby writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:07 AM 1/8/99 -0500, you wrote:
>lahman writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>My first question is: is colorization the Analyst's job or
>the Architect's job? I have assumed that it is the
>Analyst's job because it requires problem space knowledge.
>However, the analyst typically needs to know the details of
>the Architecture to do this (e.g., the analyst selects a
>particular template).

I think the architecture should do as much as it can without requiring colorization. Colorization should only be used in the case where the translation templates cannot deduce the correct translation from the models. In a utopian system, no colorizations would be required. However, I have never encountered such a utopian system in my experience.

>Finally, who does the domain bridges? Most people regard
>bridges as a translation issue but the Analyst almost always
>builds them. If the Architect is responsible for building
>them, this seems odd to me because now the Architect must do
>something manually for each application. I have always
>regarded the Architect's role as performing a service akin to
>compiler writers -- what they do is done once for all
>applications in that environment. Moreover, bridges often
>require a knowledge of the application semantics (though one
>could get around this by having the Analyst write a
>specification in which case the Architect's role is reduced
>to being a coder).

IMHO, a bridge is an important analysis artifact. The analyst is uniquely qualified to specify the details of how the domains fit together.
________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com 888-OOA-PATH effective solutions for software engineering challenges Carolyn Duby voice: +01 508-384-1392 carolynd@pathfindersol.com fax: +01 508-384-7906 ________________________________________________ "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman: > This brings me to the first thing >that may be controversial. It has always been my >understanding that the Architecture for a particular >implementation environment is independent of the >applications being translated. That is, the Architecture >should not have special processing for particular >applications -- the translation rules make all the choices. I would not say that such universality is a part of the very definition of the term "architecture". This is in line with my preference for targeted, conceptually clean, limited-purpose architectures. While such universality is in theory possible to achieve, it is another thing to actually build it in a way so as to work for all the applications it is asked to support. >While the Architecture and the OOA are independent of each >other, the translation rules are dependent upon both, just >like a bridge between domains. I was taught that Architecture received requirements from the OOA across an OOA-to-architecture bridge. So, more than being like a bridge, it *is* a bridge-- a requirements dependency of a lower domain upon a higher. >My first question is: is colorization the Analyst's job or >the Architect's job? I have assumed that it is the >Analyst's job because it requires problem space knowledge. >However, the analyst typically needs to know the details of >the Architecture to do this (e.g., the analyst selects a >particular template). >I have always had a problem with this architecture knowledge >that the Analyst must have. I would much prefer to have the >Architect identify the problem space facts needed to make >the decisions for a particular Architecture and have the >Analyst simply provide those facts without knowledge of how >they are used. The Architect would then provide translation >rules that used those facts the Do the Right Thing. I agree: I think it's primarily the architect's responsibility to "color", but he might need help from the analyst in achieving some architectural requirements (see my thoughts in the next section.) >And a related question: is Colorization necessary to resolve >any issues other than performance? I have always thought >not because the OOA is supposed to be a complete logical >solution to the problem, so all that is left to resolve are >implementation tradeoffs, such as space/time. My understanding is that the OOA proper does not specify the complete solution. E.g., * achieving a particular execution timing * object persistence * recovery from faults in the computer hardware * handling exceptions generated by the hardware, state-machine engine, database, etc. * identification of data which is to be protected by checksum, CRC, etc. and how often it is to be checked >Moreover, bridges often >require a knowledge of the application semantics (though one >could get around this by having the Analyst write a >specification in which case the Architect's role is reduced >to being a coder). I think "knowledge of the application semantics" is made unnecessary by the service domain's requirements definition. But maybe I'm missing something...? 
-Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Duby... > I think the architecture should do as much as it can without requiring > colorization. Colorization should only be used in the case where the > translation templates cannot deduce the correct translation from the > models. In a utopian system, no colorizations would be required. > However, I have never encountered such a utopian system in my experience. I don't think it is possible to avoid colorization in general. Some information that strongly affects performance is simply not specified in the OOA. A large amount of optimization can be achieved if one knows something about frequency of create/delete, navigation, etc. for particular objects and relationships. My issue in starting this thread is that I am no longer sure how much colorization is necessary. If it is only necessary for performance, then I think it is quite feasible to define a fairly small set of facts needed for the translation to do its thing in a given environment. The Analyst can supply these facts and leave it to the Architect's translation rules to deal with the details. But if more than performance is an issue, the information becomes potentially open-ended and it may come to pass that the Analyst must know Architectural details or the Architect must know problem space details. > IMHO, a bridge is an important analysis artifact. The analyst > is uniquely qualified to specify the details of how the domains > fit together. I tend to agree with you, but one of the reasons I raised the issue is that it might not necessarily be so. The analyst knows each domain's interface because that is visible within the domain. If the domain interfaces do fit together nicely so that all one has is the wormhole paper's syntactic shift (e.g., millivolts to volts or event E1 to event E8), then all the Analyst really needs to do is state what interfaces match up. One can imagine a general purpose form in the CASE tool that the Analyst fills out for _any_ bridge that allows the bridge to be built automatically. It is only when the bridge starts to get smart that real semantic information (e.g., the sequence of low level interface invocations that correspond to a high level interface) needs to be processed when building it. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 09:07 AM 1/8/99 -0500, shlaer-mellor-users@projtech.com wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >My recent dialog with Whipp and another on OTUG has prompted >me to reconsider what is in a translation. To get us out of >the Holiday Doldrums, I will toss these ruminations out >here. True HS - we all in ESMUG land were so sad when your comforting volumes were no longer out there on the ether to keep us all warm this winter. ;-) > ... >My first question is: is colorization the Analyst's job or >the Architect's job? 
>I have assumed that it is the
>Analyst's job because it requires problem space knowledge.
>However, the analyst typically needs to know the details of
>the Architecture to do this (e.g., the analyst selects a
>particular template).

We see this role separation handled two different ways.

a) small team of peers - this is where all members are "developers". Most focus on analysis, one or two focus on design (including translation), but all converge at translation time, and sit side by side for coloring, integration, debug, etc. These teams are usually less than 5 people.

b) larger teams - the largest segment is the analyst/developer, with only overview exposure to design rationale, and does very little coloring and design-level debug. The smaller segment of this team is the group of "senior" analyst/developers and the architecture designer/developers. This group leads the rest in the successful application of the design. They do the identification and development of key optimization strategies. Key coloring efforts are done by this segment.

Please notice that all roles mentioned above include "developer" as part of the name. This means each individual shares responsibility to do whatever development activity may be necessary to deliver the product. While roles and assignments may emphasize or favor one set of activities over another, analysts can't be shy (or unable) to dig in at the implementation level to shake out answers when other tactics fail. In any case, we strive to make the translation itself as intelligent as possible, reducing the amount of coloring required.

>I have always had a problem with this architecture knowledge
>that the Analyst must have. I would much prefer to have the
>Architect identify the problem space facts needed to make
>the decisions for a particular Architecture and have the
>Analyst simply provide those facts without knowledge of how
>they are used. The Architect would then provide translation
>rules that use those facts to Do the Right Thing.

While at a theoretical level this type of role separation has some appealing benefits, I'll speculate that the general business context that OOA/RD is applied in does not favor this separation.

Simply, it's not cost effective for a company to invest enough in architecture flexibility to allow this. Real world efforts get better results quicker and cheaper when the analysts are "burdened" to contribute when planning the mapping to implementation - at the appropriate time.

I'm not saying I don't believe in the rigorous separation of Analysis from Design - quite the contrary. I *am* saying that most individuals primarily tasked with analysis should be required to participate in mapping their models to implementation, and in the subsequent construction, verification, debug, and deployment of the system.

>Finally, who does the domain bridges?

Again reflecting on the need to keep things simple enough so a *normal* company can effectively apply the method with a *reasonable* investment, I have 4 points on bridging:

- design mechanisms: sufficient general purpose connectivity mechanisms must be supported to let one domain talk to another

- the actual specification of what is needed by a client is a straightforward specification from the client perspective, and the client is allowed to "know" enough about its server to effectively utilize it. (Like what you need to know to use stdio.h: fopen returns a magic cookie, and later calls to fread must get this cookie...)
- the server should know virtually nothing about who its clients are or how they operate

- any form of bridging that is not a straightforward function-call invocation or simple translation time mapping should be handled in an explicit interface domain. Again - all non-trivial *work* is done in a domain. You may or may not choose to analyze an interface domain, but bridges are kept simple.

_______________________________________________________
Pathfinder Solutions Inc.           www.pathfindersol.com
888-OOA-PATH
effective solutions for software engineering challenges
Peter Fontana              voice: +01 508-384-1392
peterf@pathfindersol.com   fax:   +01 508-384-7906
_______________________________________________________

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lynch...

> I would not say that such universality is a part
> of the very definition of the term "architecture".
> This is in line with my preference for targeted,
> conceptually clean, limited-purpose architectures.
> While such universality is in theory possible to
> achieve, it is another thing to actually build it
> in a way so as to work for all the applications
> it is asked to support.

There are code generators that work without colorization for any legal OOA. The performance may sometimes suck, but they do it. It seems to me that a brute force translation of an OOA is always possible. So I see this as an issue of desirability. For performance reasons it is certainly nice to introduce some choices based upon the problem space. I suspect Dave Whipp would like to allow even fancier choices. If it is also desirable to separate the responsibilities of Analyst and Architect, my issue comes down to whether it is possible for a clever Architect to design a generic suite of translation rules that only require a predefined suite of problem space facts from the Analyst to make the right choices.

> I was taught that Architecture received
> requirements from the OOA across an
> OOA-to-architecture bridge. So, more than being
> like a bridge, it *is* a bridge-- a requirements
> dependency of a lower domain upon a higher.

I don't disagree with this per se. But I think it is a deterministic bridge. That is, the Analyst doesn't have to massage it in any way after providing the OOA. Even in a manual translation it becomes a rote activity (funny arrow with two heads => linked list).

> My understanding is that the OOA proper does not
> specify the complete solution. E.g.,
> * achieving a particular execution timing
> * object persistence
> * recovery from faults in the computer hardware
> * handling exceptions generated by the hardware,
>   state-machine engine, database, etc.
> * identification of data which is to be protected
>   by checksum, CRC, etc. and how often it is
>   to be checked

I agree with this. But I see all these things as aspects of the implementation solution. My intent with the phrase "complete, logical solution" was to capture the idea that the basic logic of the solution at the abstract level was complete and could be implemented in any concrete environment (including a herd of people with abacuses, a really big wall, and lots of Post-Its) without change.

> >Moreover, bridges often
> >require a knowledge of the application semantics (though one
> >could get around this by having the Analyst write a
> >specification in which case the Architect's role is reduced
> >to being a coder).
> > I think "knowledge of the application semantics" > is made unnecessary by the service domain's > requirements definition. But maybe I'm missing > something...? But who writes up the domain's requirement definition? The Analyst does. If it is written up with sufficient detail that someone else can write the bridge, then my parenthetical comment is satisfied. However, I think more than the individual domains' requirements are relevant. Each domain has an interface that is exposed in the domain as wormholes. What if the client employs a high level interface while the service provides a low level interface? One has to have semantic knowledge of how to sequence a set of low level interfaces to do the entire high level request in order to write the bridge. The requirements of the individual domains would, at best, only indirectly imply this. This is the sort of thing I had in mind. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > I don't think it would be appropriate to bury critical > parts of the HW design in the translation rules. Depends what you mean by "bury". This seems to imply that you feel the translation rules are somehow less visible/readable than an OOA model. I can agree that this is often true; but I am convinced that this is not an essential (nor desirable) property of an RD based system. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be in error. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > >I have always had a problem with this architecture knowledge > >that the Analyst must have. I would much prefer to have the > >Architect identify the problem space facts needed to make > >the decisions for a particular Architecture and have the > >Analyst simply provide those facts without knowledge of how > >they are used. The Architect would then provide translation > >rules that used those facts the Do the Right Thing. > > While at a theoretical level this type of role separation has some appealing > benefits, I'll speculate that the general business context that OOA/RD is > applied in does not favor this separation. > > Simply, it's not cost effective to for a company invest enough in > architecure flexibility to allow this. Real world efforts get better > results quicker and cheaper when the analysts are "burdened" to contribute > when planning the mapping to implementation - at the appropriate time. I am not convinced of this. Right now the Analyst makes the decision directly by saying something like, "Use the template for fixed embedded pointers to instantiate R21" The Analyst has to know at least (a) that there will never be more than N (N small) referenced instances, (b) of all the architectural mechanisms available, the fixed embedded pointers will be most efficient, and (c) the fixed embedded pointer mechanism will properly handle less than N references (i.e., it will check for NULL before navigating). We agree that (b) and (c) are really the Architect's problem. 
The issue is how much work is it for the Architect to provide a mechanism to read tags on the OOA and make the proper decision? I don't think it is that big a deal because all that is really being done is to make a selection decision based upon quantitative values (the maximum number of instance referenced and a boolean for whether this number is fixed). In principle the decision itself is just a couple of IF statements based upon criteria that the Architect can easily define. The problem lies in the infrastructure (e.g., the tag-reading script) needed to support that decision. If the translator is already sophisticated enough so that the Analyst does not have to actually write any code, then I don't see this marginal cost as a big deal. Attacking this from another viewpoint, I think that if we are to have any hope of using OTS Architectures, then this burden must be the Architect's. The economies of scale inherent in the value of an OTS Architecture demand that this sort of problem be solved once. Very few people today remember when one had to write x = a + a + a instead of x = a * 3 in every application because the compiler was too dumb to optimize the multiplication. Today it is unacceptable to hand optimize code at the statement level and so it should be for commercial Architectures. > >Finally, who does the domain bridges? > - any form of bridging that is not a straightforward function-call > invocation or simple translation time mapping should be handled in an > explicit interface domain. Again - all non-trivial *work* is done in a > domain. You may or may not choose to analyze an interface domain, but > bridges are kept simple. While I agree in principle, I am curious about how far would you push this. [As you may recall our ATLAS interface wound up being 80 KLOC because of the bizarre way ATLAS defines digital.] Let me try a couple of examples... (1) The client provides Volume and Mass but the service wants Density. Do you do the conversion in the bridge? (2) The client domain has a single wormhole for Measure Voltage but the DVM service has a set of Connect, Initialize, and Measure calls to do the same thing. Would you create an interface domain to convert one call to three calls? (3) The service domain (third party HW interface) tends to produce noisy values, so you decide to average three separate measurements before sending the averaged result back to the client. Since only the particular HW is noisy, the client is justifiably not concerned with this. Would you do the averaging in an interface domain? I would tend to do (1) and (2) in the bridge but would create an interface domain to do (3). (Actually, we originally did (3) in the bridge in our first OO pilot project but hindsight indicates this was not a good idea.) FWIW, the rule of thumb that I use is based upon the need for temporary data storage. If you need to accumulate data from multiple calls to one domain before forwarding it to the other domain, an intermediate interface domain is justified. In (3) above, results data from three service calls has to be saved before it is averaged and made available to the client. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com "Lynch, Chris D. 
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman responding to Lynch: >> I would not say that such universality is a part >> of the very definition of the term "architecture". >> This is in line with my preference for targeted, >> conceptually clean, limited-purpose architectures. >> While such universality is in theory possible to >> achieve, it is another thing to actually build it >> in a way so as to work for all the applications >> it is asked to support. > >There are code generators that work without colorization for any legal OOA. The >performance may sometimes suck, but they do it. It seems to me that a brute force >translation of an OOA is always possible. I think maybe we're talking about different things. What I was referring to was the definition of the term, "architecture", and that such a definition should allow for architectures which place restrictions on OOA's that can be translated with it. Also, my definition of a code generator "working" means that the end product performs adequately in the target environment, given OOA models which conform to the limitations of the generator. I have experience with "successful" translations which fail to meet time and memory constraints. >> My understanding is that the OOA proper does not >> specify the complete solution. E.g., >> * achieving a particular execution timing >> * object persistence >> * recovery from faults in the computer hardware >> * handling exceptions generated by the hardware, >> state-machine engine, database, etc. >> * identification of data which is to be protected >> by checksum, CRC, etc. and how often it is >> to be checked > >I agree with this. But I see all these things as aspects of the implementation >solution. My intent with the phrase "complete, logical solution" was to capture >the idea that the basic logic of the solution at the abstract level was complete >and could be implemented in any concrete environment (including a herd of people >with abacuses, a really big wall, and lots of Post-Its) without change. If you're saying the OOA captures "basic logic" and by implication leaves out some essential details (such as timing), I agree. My opinion is that timing information, to the degree that it is a system or domain requirement, is the bailiwick of the analyst, even if such timing is not expressible in the models. The formal OOA notations are not sufficient, for example, to describe how a stepper motor should be driven, and the architect/implementer, if he doesn't know the fine points of this application, may produce a translation under which the motor fails to work. Speaking facetiously, I would enjoy watching someone try to run a stepper with abacuses! But to answer your original question, I think the analyst could, produce a list of software requirements for the architect to implement, working from a canned checklist of things which do not show up in the models. Such a list could be made part of the Shlaer/Mellor method. -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 09:43 AM 1/11/99 -0500, shlaer-mellor-users@projtech.com wrote: >lahman writes to shlaer-mellor-users: > ... >While I agree in principle, I am curious about how far would you push this. 
[As >you may recall our ATLAS interface wound up being 80 KLOC because of the bizarre >way ATLAS defines digital.] In general, we would like the client to specify the requirements it imposes on its servers from a convenient perspective - for it (the client). This means the server rises to the required level of abstraction required by the client, assuming the new level of abstraction is still within the subject matter of the server. > Let me try a couple of examples... > >(1) The client provides Volume and Mass but the service wants Density. Do you >do the conversion in the bridge? Again, we'd like the server to meet the needs of the client, so we kindly ask the server to offer a service with an appropriate profile. If this won't fly for some substantial reason, we may actually see if we can get the client to conform to the server constraints. >(2) The client domain has a single wormhole for Measure Voltage but the DVM >service has a set of Connect, Initialize, and Measure calls to do the same >thing. Would you create an interface domain to convert one call to three calls? Again, assuming it is not too far of a stretch for the server, we have the server do this. If it IS "above" the server, and this is specific to a single client, that client could bundle the server calls in a its own domain-level service - a service of the client domain to be called by the client domain. >(3) The service domain (third party HW interface) tends to produce noisy values, >so you decide to average three separate measurements before sending the averaged >result back to the client. Since only the particular HW is noisy, the client is >justifiably not concerned with this. Would you do the averaging in an interface >domain? Assuming by "third party HW interface" you mean fixed/non-negotiable, then you would write an interface. Your most common need for an "interface" domain is to interface to 3rd party elements. >I would tend to do (1) and (2) in the bridge but would create an interface >domain to do (3). (Actually, we originally did (3) in the bridge in our first >OO pilot project but hindsight indicates this was not a good idea.) > >FWIW, the rule of thumb that I use is based upon the need for temporary data >storage. If you need to accumulate data from multiple calls to one domain >before forwarding it to the other domain, an intermediate interface domain is >justified. In (3) above, results data from three service calls has to be saved >before it is averaged and made available to the client. Not a bad rule. We say basically the same thing: if you have to "remember" something, then it is stored in a domain somewhere. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" wrote: > ...What I > was referring to was the definition of the term, "architecture", > and that such a definition should allow for architectures which > place restrictions on OOA's that can be translated with it. My definition of architecture does not allow the concept of "can be translated with it": that is the task of the translator. An architecture may be created with an idea of what will be translated onto it. 
For example, an architect may construct an implementation of relationships that doesn't support M:M relationships. A translator can ?easily? convert an M:M into two M:1 relationships using the assoc object.

An architecture has 2 sources of requirements in addition to the constraints of the implementation technology: the OOA-of-OOA and the application (not the model). It is possible to ignore either of these.

. If you concentrate on the OOA-of-OOA requirements then you'll probably have a simple translator
. If you concentrate on the application requirements then the translator will have a lot more work to do. Stepping stones may be required (cf. bridges described as domains).

Most SM people seem to favour the OOA-of-OOA as the main source of requirements. The requirements not derived from this can be summarised by 2 words: persistence and concurrency. After this macro-architecture decision, the development proceeds along the lines of "what mechanisms might we use to implement objects, attributes, relationships, ...".

An alternative approach focuses on the needs of the application. For example, an application needs to manipulate facts: facts can be grouped and connected. This might seem like we're doing the same thing; but the architectural modeling process is not biased by the meta-model of the application domain. The distinction is much greater when considering behaviour. This alternative approach decreases the coupling between OOA and the architecture, but increases the ties to the application concept.

Most people who construct architectures, either software or hardware, have never heard of OOA. This does not prevent them building architectures.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be in error.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Peter J. Fontana" wrote:
> In general, we would like the client to specify the requirements it imposes
> on its servers from a convenient perspective - for it (the client). This
> means the server rises to the required level of abstraction required by the
> client, assuming the new level of abstraction is still within the subject
> matter of the server.

I agree that a client should impose requirements from a convenient perspective. However, forcing the server to conform increases the coupling between the two domains. It also reduces the reusability of servers.

My perspective is that a domain defines interfaces from its own perspective. Bridges are used to alter the perspective. If there's a big gap, then an additional domain may be needed. This domain is a bit unusual: its perspective is that it's a bridge, so it conforms to both the client and the server.

Considering Lahman's 3 cases:

1. client provides Volume and Mass but the service wants Density

A bridge should do the conversion. A complication may arise if the client has the concept of density, but has chosen to use mass & volume for a specific interface. In this case, we don't want to define the conversion twice. This isn't as nasty as it sounds. The client's mass and volume concepts will be visible in the type system: all values have an attribute-domain. I think SM is wrong to associate transforms with objects. They should be associated with the definitions of attribute domains. If this is the case, then the same mechanisms that give a bridge visibility of attribute domains would also provide the transforms.
2. sequence of actions

A sequence can be thought of as a set of states: "connecting", "initialising" and "measuring". This state model could be explicitly defined. It would be placed in a bridge-domain. It's not too difficult to optimise out a state machine where all transition events are replies from wormholes that are activated in the state actions. However, it may be even easier to generate a state machine from a sequential code block. Comparisons will quickly become analogous to the differences between ADFDs and ASL. Which brings us round full circle. If the server provides the services synchronously, then the bridge is written to use a block of sequentially constrained ASL or ADFDs. If the server is asynchronous, then a state model is better.

3. Averaging many values

I think we're all agreed: you don't do this in a simple bridge.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be in error.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > I don't think it would be appropriate to bury critical
> > parts of the HW design in the translation rules.
>
> Depends what you mean by "bury". This seems to imply that you
> feel the translation rules are somehow less visible/readable
> than an OOA model. I can agree that this is often true; but
> I am convinced that this is not an essential (nor desirable)
> property of an RD based system.

Would you prefer "hidden" to "buried"? I have no problem with the visibility of translation rules -- my issue is with the audience and level of abstraction. When the Analyst looks at an OOA, I think everything relevant to the abstract problem solution -- in this case a HW design -- should be visible in that OOA. I don't think the Analyst, or anyone else interested in the design logic, should have to look at the translation rules for a particular implementation to grok that solution.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lynch...

> I think maybe we're talking about different things. What I
> was referring to was the definition of the term, "architecture",
> and that such a definition should allow for architectures which
> place restrictions on OOA's that can be translated with it.

Yes, we were. B-)

I'm not sure I like the idea of architectures placing restrictions on what OOAs can be translated. That strikes me as the same thing as a C compiler dictating what sorts of problems can be solved using C.

> Also, my definition of a code generator "working" means that the
> end product performs adequately in the target environment, given
> OOA models which conform to the limitations of the generator.
> I have experience with "successful" translations which fail to
> meet time and memory constraints.

I agree. That's why we use manual generation on most stuff.

> If you're saying the OOA captures "basic logic" and by implication
> leaves out some essential details (such as timing), I agree.
> My opinion is that timing information, to the degree
> that it is a system or domain requirement, is the bailiwick
> of the analyst, even if such timing is not expressible in
> the models.
The formal OOA notations are not sufficient, > for example, to describe how a stepper motor should be driven, > and the architect/implementer, if he doesn't know the > fine points of this application, may produce a translation > under which the motor fails to work. Speaking facetiously, > I would enjoy watching someone try to run a stepper with > abacuses! But to answer your original question, I think > the analyst could, produce a list of software requirements for the > architect to implement, working from a canned checklist of things > which do not show up in the models. Such a list could be made > part of the Shlaer/Mellor method. At some point in the other thread I said that one basis for my skepticism about whether S-M could be used for HW design was that the concepts of timing and events in the OOA were quite different than those implemented in HW. *If* one can come up with a suite of OOA abstractions that can be translated consistently to resolve those discrepancies it would be possible. But I am not sanguine about being able to find such abstractions. If not, then I would vote for using an existing description rather than having the Analyst twiddle the translation. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > In general, we would like the client to specify the requirements it imposes > on its servers from a convenient perspective - for it (the client). This > means the server rises to the required level of abstraction required by the > client, assuming the new level of abstraction is still within the subject > matter of the server. Sure, that Just Works if one is not reusing domains. But as soon as you move that service domain to an application with a different client you can have a mismatch of interface abstraction. > >(1) The client provides Volume and Mass but the service wants Density. Do you > >do the conversion in the bridge? > > Again, we'd like the server to meet the needs of the client, so we kindly > ask the server to offer a service with an appropriate profile. If this > won't fly for some substantial reason, we may actually see if we can get the > client to conform to the server constraints. This usually happens when the service domain is 3rd party. I don't like the idea of changing my client's innards just because the service's interface is not to my liking. But to address the original question about creating an interface domain, assume that _both_ domains are 3rd party. > >(2) The client domain has a single wormhole for Measure Voltage but the DVM > >service has a set of Connect, Initialize, and Measure calls to do the same > >thing. Would you create an interface domain to convert one call to three > calls? > > Again, assuming it is not too far of a stretch for the server, we have the > server do this. If it IS "above" the server, and this is specific to a > single client, that client could bundle the server calls in a its own > domain-level service - a service of the client domain to be called by the > client domain. Same issue. Again, assume both domains are 3rd party. Do you do an interface domain? 
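For what it's worth, when the mismatch really is just call bundling, the glue involved is tiny. Here is a hypothetical sketch of what case (2) amounts to if it is done in the bridge rather than in an interface domain; the DVM call names are invented for illustration, not a real driver API.

    #include <iostream>

    // Hypothetical 3rd-party DVM service interface (names invented); stubbed
    // here only so the sketch is self-contained.
    namespace dvm_service {
        void   Connect(int /*channel*/) { /* talk to the hardware */ }
        void   Initialize()             { /* settle, autorange, etc. */ }
        double Measure()                { return 1.5; /* stub reading, volts */ }
    }

    // Bridge glue: the client domain's single "Measure Voltage" wormhole is
    // realized by sequencing the server's three lower-level calls.
    double MeasureVoltage(int channel)
    {
        dvm_service::Connect(channel);
        dvm_service::Initialize();
        return dvm_service::Measure();
    }

    int main()
    {
        std::cout << MeasureVoltage(3) << " V\n";   // prints the stub reading
        return 0;
    }

The point of the sketch is only that no data has to be remembered between calls, which is exactly why this case can stay in the bridge.
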
> >(3) The service domain (third party HW interface) tends to produce noisy > values, > >so you decide to average three separate measurements before sending the > averaged > >result back to the client. Since only the particular HW is noisy, the > client is > >justifiably not concerned with this. Would you do the averaging in an > interface > >domain? > > Assuming by "third party HW interface" you mean fixed/non-negotiable, then > you would write an interface. Your most common need for an "interface" > domain is to interface to 3rd party elements. I agree that the requirement for a "quiet" measurement is the client's, but even if the service domain was mine, I think I would be reluctant to "raise" the service domain to the client in this case. I would think of the service domain subject matter as a converter to change messages into register R/Ws. The domain abstraction would not have an idea of a Test or even a Voltage -- all it knows about are messages, registers, and bit fields. To provide the averaging the concept of a Measurement would have to be introduced into the domain. I think this changes the subject matter of the domain and I would be reluctant to do so -- especially if some other client in another application wouldn't think the signal was noisy. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 05:59 PM 1/11/99 -0500, shlaer-mellor-users@projtech.com wrote: >lahman writes to shlaer-mellor-users: >This usually happens when the service domain is 3rd party. I don't like the idea of >changing my client's innards just because the service's interface is not to my >liking. But to address the original question about creating an interface domain, >assume that _both_ domains are 3rd party. If both sides are fixed (3rd party), then you create an interface domain. >Same issue. Again, assume both domains are 3rd party. Do you do an interface >domain? Yes. >I agree that the requirement for a "quiet" measurement is the client's, but even if >the service domain was mine, I think I would be reluctant to "raise" the service >domain to the client in this case. Let's assume you're right in this case (since it *is* your example), and we don't want to force the server to reach that high. Then this averaging would go in an interface domain to provide a "clean" reading to the client domain(s) that need(s) it. In cases where something relatively simple is being shunted off into its own "interface domain", it may seem like there isn't enough substance to justify a "whole domain". Consider two points here: - a domain doesn't have to be "expensive" - your translation rules can detect "simplicity" through the absence of state models and/or object count, and perhaps avoid some of the implementation elements only needed by bigger domains. - once the interface capabilities are in place, they may be useful to other clients - either now or in future releases. _______________________________________________________ Pathfinder Solutions Inc. 
www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > In cases where something relatively simple is being shunted off into its own > "interface domain", it may seem like there isn't enough substance to justify > a "whole domain". Consider two points here: > > - a domain doesn't have to be "expensive" - your translation rules can > detect "simplicity" through the absence of state models and/or object count, > and perhaps avoid some of the implementation elements only needed by bigger > domains. I think this is more theoretical than practical -- it is a nontrivial problem in the general case. And the Analyst would still have to specify what is going on at a level of detail that is close to writing a bridge. I haven't seen too many OTS CASE tools that do this. B-) > - once the interface capabilities are in place, they may be useful to other > clients - either now or in future releases. Certainly true for the high vs. low abstraction case. However, I see this as similar to object level reuse -- if you try to do it in a formal manner (librarians, adequate documentation, cross referencing, etc.) the infrastructure costs may well outweigh the benefits. I think there is another advantage, though: the ability to use a simulator. A big problem with smart bridges is that the defect rates are higher because you are outside the OOA discipline and you have to build a test harness to unit test them. If you use the domain formalism you can use the CASE tool's simulation facilities to unit test even if the domain simply connects input synchronous services to output synchronous services. OTOH, the CASE tool has a fixed overhead for setting up a domain. If one already has a harness to test bridges (fairly likely because bridges look pretty much alike to a test harness), the simulation argument goes away. [Even if this were the case, I would still do the averaging example in a domain just to heighten visibility.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > Sorry I'm tardy -- I was out on vacation.... No problem. > > > > The order in which external events are processed is not changed by > > > > warping. > > > > > True, but the order of external and internal events _is_ changed. > > > > No, this is never true in my architecture. But the order in which > > internal events are generated is changed, which is really the point. > Let's suppose the order that events have been received and are sitting in the queues when warp > is engaged are: > A1 (internal) > B1 (external) > A2 (internal) > W1 (event to start warp processing) > > W2 (event generated during warp processing) > If there were no warp these would be popped and executed as A1, B1, A2, W1, W2. However, when > you introduce the warp deferral of the external queue the order of processing will be A1, A2, > W1, W2, B1 with warp. Therefore you have modified the relative ordering of internal and > external events. 
This can cause problems if self-directed events are involved (see below). I probably confused the situation by mentioning an external event queue. To simplify the problem, just consider the one internal event queue, which is empty on startup. When an unsolicited external event comes along it is placed on the internal event queue by the bridge to the implementation domain that I mentioned in a previous post. So now we have a single external event on the internal event queue. This is the only time an external event can exist on this queue because external events cannot be placed on the queue asynchronously. The architecture is then set to work on this event which causes a state action to execute thereby generating internal events which cause other state actions to execute and so on. The point here is that only the first event is an external event and all the others are internal. Eventually, all the internal events will get processed and the queue will be empty. It's only then that the task looks in the mailbox to get another message containing another external event and the processing starts up again. > > > It is fair for an OOA to > > > rely upon a problem space constraint that orders the external events. Since self-directed > > > events events take precedence, the OOA might use them to set a state machine to the proper > > > state to accept the second event. > > > > Self-directed events are always internal events, so they're always > > processed before the next external event is even looked at. > True. But let's look at another queue: > B1 (external event) > W1 (event to kick off warp processing) > > A2 (internal event generated by warp processing) > Suppose B1 is directed to instance A and when that action is processed, event A1 would be > generated. Without warp processing the queue would look like > B1 (external event) > W1 (event to kick off warp processing) > A1 (internal event resulting from B1) > A2 (internal event generated by warp processing) > But with warp processing we have W1, A2, B1, A1 as the execution sequence. If A1 and A2 are > self-directed, then this violates the rules because the order of the self directed A1/A2 have > been reversed. [It would be incorrect even if they were not self-directed, so long as they came > from the same source and were targeted to the same destination.] > One can argue that this is not a problem since there is no guarantee that B1 will be processed > in any particular order, even if it is directed at the A1/A2 instance because it comes from a > different source. My issue is that this is a stretch if the architecture would have processed > B1 first if an event counter were used rather than warp -- the model should execute A1/A2 the > same way in both situations. These queues are not vaild for the reasons given above and I think the semantics are a bit suspect too. :-) > > > Suppose some action in the initiated processing generates two events. > > > > I'll assume you mean these events are generated during the warp. > > > > > One is the event that > > > announces it has completed the processing the originator wanted. > > > > The originator does not require (and must not get sent) such an > > event. BTW, states concerned with maintaining the counter in the > > originator are also unnecessary. > I confused things by mixing the warp and non-warp cases. In the non-warp case an event is > generated back to the originator that is counted to see when the loop is done. 
I am assuming > that this is done in the same action with the generation of an event for other processing. You > would replace that return event to the originator with the warp termination. My point is that > the warp doesn't terminate because there is still an unrelated event on the queue from the > action that should have terminated the processing. There is no action that should terminate the processing. Warp termination is simply the empty internal event queue. I don't understand your point about the unrelated event. Remember, the external event has caused all the subsequent internal events to be generated and processed until the internal event queue became empty. Would you say all the generated internal events were related to the first external event? > > > The other event starts a > > > whole new chain of unrelated (to the originator) processing. > > > > The new chain must be related since it was started during the warp. > Ah, this is where we disagree. I agree that most iterations begin and end at the same instance > and that instance is usually some sort of controller for a grander sequence. In this case the > terminating action does not generate the next event, the originator does after all the return > events are counted or warp returns. > However, it is not uncommon to have to wait for only a subset of processing. In that case warp > needs to terminate when that subset is complete, but another event to complete the overall > processing is generated in the terminating action. > As a trivial example, let's say an instance of object A wants to delete itself after sending off > an event to each of several instances of object B. However, the B instances need to access an > attribute from the A instance. Therefore the A instance can't delete itself until all the B > instances have accessed the data, so it needs to wait for that. [Why not include the data on > the event? Because Dave Whipp doesn't like data on events. Actually, I am just keeping > things simple; a self-delete is a very clear example of the principle.] > However, once the B instances access the data they will merrily continue processing by placing > other events on the queue. Those new events initiate chains of processing that have nothing to > do with the constraint on deleting the A instance. I agree with you about having to wait for only a subset of processing, but I don't really know what you mean by 'terminating action". Would this be the last state action of B to be processed? Your example is a bit too trivial to demonstrate the principle. Instead of B needing to access A, let's have an object C that requires access to A's data. The B state action generates an event to C which generates no further events. Also, let's say it's the A5 state action that sends all the events to B, enters the warp and then sends event A6. Plus, let's have action A6 delete the A instance. So now the internal event queue looks like this over time: A5 => Empty => B1 => B2 => B2 => B3 => B3 => C1 => C1 => C2 => C3 => Empty => A6 => Empty => X1 B2 B3 B3 C1 C1 C2 C2 C3 B3 C1 C2 C3 /\ /\ /\ || || || Warp Warp Wait for Starts Ends external event ============================ Life time of A5 ============================= It's perfectly OK for the B or C instances to continue processing after accessing the data by placing other events on the queue and I would expect this to happen. No matter how many internal events were generated the internal event queue would get emptied and then return from the warp. 
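As a rough illustration of the warp mechanism being described here (a sketch only, in Python, with invented names; it is not drawn from Finn's actual architecture): external events are injected one at a time by the bridge, and a warp is nothing more than dispatching internal events until the single internal queue drains.

from collections import deque

class EventQueueManager:
    def __init__(self):
        self._internal = deque()            # the one internal event queue

    def generate(self, handler, data=None):
        # State actions call this to post an internal event.
        self._internal.append((handler, data))

    def warp(self):
        # Finn's termination condition: no counter, just an empty queue.
        while self._internal:
            handler, data = self._internal.popleft()
            handler(data)                   # a state action; it may generate() more

    def accept_external(self, handler, data=None):
        # The bridge converts an outside-world message into the single
        # external event allowed on the queue, then runs it to completion.
        self.generate(handler, data)
        self.warp()

On this reading, the question the rest of the exchange turns on is not whether warp() returns, but what else has been put on the queue by the time it does.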
> > > That unrelated processing will > > > continue to throw events on the queue preventing the exit from warp even after all the > > > processing that the originator was really interested in was done. > > > > No matter how many internal events are created during the warp the > > internal event queue will always empty eventually. This is the only > > condition required for ending the warp. > True, but that may be the termination of application execution. In most situations I can think > of that is probably incorrect behavior. If you recall, the reason you suggested the warp was to > terminate a loop _to allow an event to be generated_ after a subset of processing was > completed. If processing is going to proceed unimpeded until application termination without > doing anything, why would one have needed to count events or warp to begin with? > The empty-queue return can be a curse as well. What if the warp processing has a wait state for > an external event that must be received before processing is complete? The warp will return > prematurely. I don't think the "termination of application execution" is relevant here. In the example above, the event A6 is correctly generated after processing the subset of B and C events. Without the warp it would have been generated immediately after B3 deleting the instance of A prematurely. -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > I probably confused the situation by mentioning an external event > queue. To simplify the problem, just consider the one internal > event queue, which is empty on startup. When an unsolicited > external event comes along it is placed on the internal event queue > by the bridge to the implementation domain that I mentioned in a > previous post. So now we have a single external event on the > internal event queue. This is the only time an external event can > exist on this queue because external events cannot be placed on the > queue asynchronously. This last sentence is the key factoid. I assumed we were talking about a general purpose architecture rather than one tied to a specific application. > > I confused things by mixing the warp and non-warp cases. In the non-warp case an event is > > generated back to the originator that is counted to see when the loop is done. I am assuming > > that this is done in the same action with the generation of an event for other processing. You > > would replace that return event to the originator with the warp termination. My point is that > > the warp doesn't terminate because there is still an unrelated event on the queue from the > > action that should have terminated the processing. > > There is no action that should terminate the processing. Warp > termination is simply the empty internal event queue. I don't > understand your point about the unrelated event. Remember, the > external event has caused all the subsequent internal events to be > generated and processed until the internal event queue became empty. > Would you say all the generated internal events were related to the > first external event? We seem to be talking past one another here. The terminal action of the thread is the last action in the thread where the relevant processing is completed. 
The key word here is "relevant" because we are only interested in a subset of the application's overall processing, so I will belabor the word in the following... If we used counted events instead of warp, there would be some action that returned an event that indicated that the relevant processing thread was completed. That action is the last or terminal action of the thread. In the ideal warp case no event is returned, but it is still the terminal action where processing is completed and no further events need to be issued to complete the relevant processing. The extension to multiple threads is straightforward. If originally several events were sent to several state machines, all threads would still wind up in some action where the relevant thread was complete, albeit in different state machines. In the non-warp case each of those actions would have issued an event indicating completion. There would be one of those actions that was the last one to execute (more precisely, the last one whose event was counted). That last one is the terminal action, though it might not be deterministic which state machine it would be in a priori. For your warp case you are assuming that the terminal action (or any other action in the relevant thread) does not issue another event to do other, unrelated processing, so the event queue would eventually become empty. If such events are issued, then the event queue may not empty and warp may never return even though the relevant processing is complete. > > As a trivial example, let's say an instance of object A wants to delete itself after sending off > > an event to each of several instances of object B. However, the B instances need to access an > > attribute from the A instance. Therefore the A instance can't delete itself until all the B > > instances have accessed the data, so it needs to wait for that. [Why not include the data on > > the event? Because Dave Whipp doesn't like data on events. Actually, I am just keeping > > things simple; a self-delete is a very clear example of the principle.] > > > However, once the B instances access the data they will merrily continue processing by placing > > other events on the queue. Those new events initiate chains of processing that have nothing to > > do with the constraint on deleting the A instance. > > I agree with you about having to wait for only a subset of > processing, but I don't really know what you mean by 'terminating > action". Would this be the last state action of B to be processed? > > Your example is a bit too trivial to demonstrate the principle. I disagree. The principle can be demonstrated with a single thread because the extension to sending multiple events to two or more B's is trivial. All that matters is that A must have a warp to wait for B to access data before deleting itself while B _must_ issue other events so that the application can complete processing. But let's do it with two B instances. I will use [n] to indicate which instance is involved. First let's do your idealized warp... State A1: sends event E1 to B[1] and B[2]. A must wait until both B[1] and B[2] have accessed A's data before it deletes itself, so it opens a warp. Event E1 for B[1] is processed and State B1[1] completes. Event E1 for B[2] is processed and State B1[2] completes. The queue is empty, so warp returns. State A1 issues event E2 to itself Event E2 is processed and A transitions to State A2, which is the delete state. So far, so good. 
However, the application has to continue processing, but there are no events on the queue if the queue was empty before entering warp. Suppose B1 places an event on the queue that is unrelated to the delete of A -- it simply continues to the logical flow of processing for the application. Now we have State A1: sends event E1 to B[1] and B[2]. A must wait until both B[1] and B[2] have accessed A's data before it deletes itself, so it opens a warp. Event E1 for B[1] is processed. State B1[1] accesses A's data and places event E3 to C[1] on the queue and completes. Event E1 for B[2] is processed. State B1[2] accesses A's data and places event E3 to C[2] on the queue and completes. At this point it is safe to delete A, so the warp should return (i.e., the relevant processing is done). However, it can't because there are two E3 events in the queue. When the C actions also issue events the processing will continue indefinitely and the warp will not return. In this simplistic example, one would probably fix the problem by issuing the E3 events from A1 after warp returns. But this might not be possible (e.g., the specific event, E3, may depend upon the state of B when E1 can be fielded by multiple states) or practical (e.g., the thread is longer and more complicated so that the work A has to do to figure the right events is prohibitive). > Instead of B needing to access A, let's have an object C that > requires access to A's data. The B state action generates an event > to C which generates no further events. Also, let's say it's the A5 > state action that sends all the events to B, enters the warp and > then sends event A6. Plus, let's have action A6 delete the A > instance. So now the internal event queue looks like this over > time: > > A5 => Empty => B1 => B2 => B2 => B3 => B3 => C1 => C1 => C2 => C3 => Empty => A6 => Empty => X1 > B2 B3 B3 C1 C1 C2 C2 C3 > B3 C1 C2 C3 > /\ /\ /\ > || || || > Warp Warp Wait for > Starts Ends external > event > ============================ Life time of A5 ============================= > > It's perfectly OK for the B or C instances to continue processing > after accessing the data by placing other events on the queue and I > would expect this to happen. No matter how many internal events > were generated the internal event queue would get emptied and then > return from the warp. It is true that the queue eventually will get emptied. But hopefully the example above makes it clear that if the B and/or C actions place events on the queue that are unrelated to the processing _relevant_ to A, then the queue will empty only under one of two conditions: the application terminates (i.e., completes execution) or the application is waiting for an external event. Both situations will be long after the data accesses that are relevant to A have actually completed. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > > previous post. So now we have a single external event on the > > internal event queue. This is the only time an external event can > > exist on this queue because external events cannot be placed on the > > queue asynchronously. > This last sentence is the key factoid. 
I had to go and find out what "factoid" means: _________________________________________________________________ Hypertext Webster Gateway: "factoid" From WordNet (r) 1.6 (wn) factoid n 1: something resembling a fact; unverified (often invented) information that is given credibility because it appeared in print 2: a brief (usually one sentence and usually trivial) news item _________________________________________________________________ I take it you had not appreciated this fact about my architecture.
> I assumed we were talking about a general purpose architecture > rather than one tied to a specific application.
We're not talking about an architecture tied to a specific application. For the most part, any architecture should be capable of executing any OOA model. I may have misled you slightly by saying "This is the only time an external event can exist on this queue...". Of course, this does not just happen at startup, but repeats continuously during the lifetime of the system. The last sentence of my *original* paragraph should have made this clear.
> We seem to be talking past one another here. The terminal action of the thread is the last action in > the thread where the relevant processing is completed. The key word here is "relevant" because we are > only interested in a subset of the application's overall processing, so I will belabor the word in the > following... > If we used counted events instead of warp, there would be some action that returned an event that > indicated that the relevant processing thread was completed. That action is the last or terminal > action of the thread. In the ideal warp case no event is returned, but it is still the terminal > action where processing is completed and no further events need to be issued to complete the relevant > processing. > The extension to multiple threads is straightforward. If originally several events were sent to > several state machines, all threads would still wind up in some action where the relevant thread was > complete, albeit in different state machines. In the non-warp case each of those actions would have > issued an event indicating completion. There would be one of those actions that was the last one to > execute (more precisely, the last one whose event was counted). That last one is the terminal action, > though it might not be deterministic which state machine it would be in a priori. > For your warp case you are assuming that the terminal action (or any other action in the relevant > thread) does not issue another event to do other, unrelated processing, so the event queue would > eventually become empty. If such events are issued, then the event queue may not empty and warp may > never return even though the relevant processing is complete.
OK, I now understand the terms: _relevant processing_, _terminal action_ and _unrelated processing_. Let's define the term _unrelated event_ as an event that will cause some unrelated processing. Contrary to what you have stated, I'm not assuming actions in the relevant thread do not issue unrelated events to do other unrelated processing. As I said before, I would expect other unrelated processing to normally occur. It's quite acceptable for both relevant and unrelated state actions to be processed during the warp. However, you seem to be implying that the event queue may not empty because an external event may give rise to a continuous stream of internal events. With the possible exception of delayed events, this cannot happen with a valid OOA model.
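For readers following the warp-versus-counted-events comparison, here is a minimal sketch (in Python, with invented names; none of this comes from the thread or from any published architecture) of the non-warp alternative quoted above: each B action posts a counted completion event back to A, and the last reply to arrive plays the role of the terminal action, so A can delete itself even while unrelated events are still sitting on the queue.

from collections import deque

# Sketch only: a single internal event queue, two B instances, counted replies.
queue = deque()                       # the internal event queue

class A:
    def __init__(self, n_bs):
        self.data = 42                # the attribute the B actions need
        self.outstanding = n_bs       # replies still expected
        self.deleted = False

    def on_done(self):
        self.outstanding -= 1         # the counted completion event
        if self.outstanding == 0:     # the last reply = the terminal action
            self.deleted = True       # now safe to self-delete

def make_b_action(a):
    def b_action():
        _ = a.data                    # relevant: access A's data
        queue.append(a.on_done)       # relevant: counted reply to A
        queue.append(lambda: None)    # unrelated: the application thread carries on elsewhere
    return b_action

a = A(n_bs=2)
for _ in range(2):
    queue.append(make_b_action(a))    # A's action sends E1 to each B

while queue:                          # the architecture pops and executes
    queue.popleft()()

assert a.deleted                      # deleted as soon as the second reply was counted

The contrast with the warp is visible in the last few lines: with counting, A's delete does not wait for the unrelated events to drain, which is exactly the property being argued over.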
> > Your example is a bit too trivial to demonstrate the principle. > I disagree. The principle can be demonstrated with a single thread because the extension to sending > multiple events to two or more B's is trivial. All that matters is that A must have a warp to wait > for B to access data before deleting itself while B _must_ issue other events so that the application > can complete processing. OK, I can live with that. > But let's do it with two B instances. I will use [n] to indicate which instance is involved. First > let's do your idealized warp... > State A1: sends event E1 to B[1] and B[2]. A must wait until both B[1] and B[2] have accessed A's > data before it deletes itself, so it opens a warp. I don't think you can open a warp, but you can open a wormhole. :-) > Event E1 for B[1] is processed and State B1[1] completes. > Event E1 for B[2] is processed and State B1[2] completes. > The queue is empty, so warp returns. > State A1 issues event E2 to itself > Event E2 is processed and A transitions to State A2, which is the delete state. > So far, so good. However, the application has to continue processing, but there are no events on the > queue if the queue was empty before entering warp. I'm curious to know why you think the application has to continue processing. It's usual for the application to go dormant at this point since it's waiting for the next external event. > Suppose B1 places an event on the queue that is > unrelated to the delete of A -- it simply continues to the logical flow of processing for the > application. Now we have > State A1: sends event E1 to B[1] and B[2]. A must wait until both B[1] and B[2] have accessed A's > data before it deletes itself, so it opens a warp. > Event E1 for B[1] is processed. > State B1[1] accesses A's data and places event E3 to C[1] on the queue and completes. > Event E1 for B[2] is processed. > State B1[2] accesses A's data and places event E3 to C[2] on the queue and completes. > At this point it is safe to delete A, so the warp should return (i.e., the relevant processing is > done). However, it can't because there are two E3 events in the queue. When the C actions also issue > events the processing will continue indefinitely and the warp will not return. You've said it again! It's simply not possible for a vaild OOA model to be self sustaining by continuing to emit events within itself. The warp will return, all threads must die. :-) > > A5 => Empty => B1 => B2 => B2 => B3 => B3 => C1 => C1 => C2 => C3 => Empty => A6 => Empty => X1 > > B2 B3 B3 C1 C1 C2 C2 C3 > > B3 C1 C2 C3 > > /\ /\ /\ > > || || || > > Warp Warp Wait for > > Starts Ends external > > event > > ============================ Life time of A5 ============================= I should make it clear that B1, B2 and B3 are the same events (same name) directed at instances 1, 2 and 3 of B. The same goes for C1, C2 and C3. For the A object, event A5 causes a transition into state A5. Same for A6. > It is true that the queue eventually will get emptied. But hopefully the example above makes it clear > that if the B and/or C actions place events on the queue that are unrelated to the processing > _relevant_ to A, then the queue will empty only under one of two conditions: the application > terminates (i.e., completes execution) or the application is waiting for an external event. Both > situations will be long after the data accesses that are relevant to A have actually completed. 
It does not matter how long A has to hang around as long as the A instance is not involved with the unrelated processing. -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > Hypertext Webster Gateway: "factoid" > > >From WordNet (r) 1.6 (wn) > > factoid n 1: something resembling a fact; unverified (often invented) > information that is given credibility because it appeared in print 2: > a brief (usually one sentence and usually trivial) news item Fascinating. And I thought I was coining a word. The meaning that I impugned was meant to convey the idea of a single, small fact among several important ones. Marginally related aside... When I was in college I and a couple of fraternity brothers took a humanities course where we played a game in each theme we had to do. The idea was to use a phony word that sounded good and sort of implied what was meant by association and context. The idea was to see how far we could go before getting called on it. I was the only one who hadn't had a query in the margin until the last theme. Alas, I had one frostie too many before doing that theme and I use shrivrhoidal when referring to the glibness of Hegel. So I got to stay after class and explain it personally. Utterly unrelated stream of consciousness aside... My buddy Joey once got a 1 on a humanities quiz. The instructor's note said he only got that because he got his name right. The problem was that Joey was a Physics major and he answered the quiz with assorted things like Schroedinger's equation. Where he went wrong was that the question was about Hegel's dialectic and Joey read it as Hegel's dielectric. True story. [This was typical of Joey; very brilliant but slightly out of phase with the rest of the world. He once wrote the IRS to the effect that he had forgotten to file for two years and he had lost his W2 forms but since they owed him a "little bit" from the previous year and he probably owed them a "little bit" from the current year, they should call it even. He never heard a word back from them, but I will bet it is framed over the Director's desk.] But I digress... > We're not talking about an architecture tied to a specific > application. For the most part, any architecture should be capable > of executing any OOA model. I may have missled you slightly by > saying "This is the only time an external event can exist on this > queue...". Of course, this does not just happen at startup, but > repeats continuously during the lifetime of the system. The last > sentence of my *original* paragraph should have made this clear. If external events can't be placed in the queue asynchronously, then the architecture has made simplifying assumptions that are specific to the application (or a class of similar applications) and its interactions with the outside world. This is true even for the interleaved view of time. [We do basically the same thing -- see FWIW below -- but not in the architecture.] > It's quite acceptable for both relevant and unrelated state actions > to be processed during the warp. However, you seem to be implying > that the event queue may not empty because an exteral event may give > rise to a continuous stream of internal events. With the possible > exception of delayed events, this cannot happen with a vaild OOA > model. 
No, I am saying that if an action that is part of the relevant processing also generates an unrelated event, then the unrelated event may give rise to a continuous stream of internal events so that the queue will not empty to allow a warp return. > > So far, so good. However, the application has to continue processing, but there are no events on the > > queue if the queue was empty before entering warp. > > I'm curious to know why you think the application has to continue > processing. It's usual for the application to go dormant at this > point since it's waiting for the next external event. I was thinking of unrelated internal processing that needed to complete between external events but you're right, this is a red herring. The action where warp returned would _usually_ place an event on the queue to continue processing -- though my delete example is an exception (see below). > You've said it again! It's simply not possible for a vaild OOA > model to be self sustaining by continuing to emit events within > itself. The warp will return, all threads must die. :-) I think I see the hangup. Your argument seems to be that all practical applications will have to halt at some point to wait for an external event (e.g., GUI message) to determine the next spate of processing. If you do not allow external events to be placed on the queue while warp is engaged, then eventually the queue must empty. There is an implicit assumption in this: that it doesn't matter that the warp may not return until the next external event and my problem is with that assumption. As in the example, only a tiny fraction of that interval's processing may be relevant to the warp, but it may all continue until the next external event because the E3 events could lead to arbitrarily complex processing in that interval. My position is that the OOA may depend upon the warp returning _before_ pausing for the next external event. As a real example of this sort of thing, we have a device driver that will process an entire digital test with a single external event. The bridge reads a binary file to initialize a gaggle of instances and then places a DoIt event on the queue. Before the next external event is processed the driver may process up to 10**5 internal events. [FWIW, in that driver we sometimes basically do a warp for external events, but it is done in the bridge. The bridge is a programmatic API to a C program client and the C program's API call does not return until the processing is complete. The trick is to have the bridge push the current event queue, load an event, and restart the event queue manager. When the queue is empty the manager returns, the original queue is popped, the queue manager is restarted, and the bridge API call returns. However, this has its own set of very dangerous side effects so it works only in specialized situations. B-)] > It does not matter how long A has to hang around as long as the A > instance is not involved with the unrelated processing. This is the crucial assumption that I disagree with. Suppose somewhere in that unrelated processing in my example one needed to generate events to the existing A instances. If the A instance had not been deleted in a timely fashion, an event to the wrong instance will be generated. Worse, since the E2 event is self directed it will be processed before others directed to A, so the instance will not exist when the second event is actually presented. 
The key issue of the delete example is that A is waiting so that it can do something to itself, rather than to kick off continuing processing -- the unrelated processing is being continued via the E3s and it may later depend upon what A was supposed to do. In this situation it becomes difficult to predict the side effects of A being tardy in doing what it is supposed to do because the E2 and E3 paths are in parallel. Sending an event to a nonexistent instance will probably have noticeable effects, but if A just updates its data there could be very subtle errors if that data is prematurely accessed. Having said this, I have to note my own implicit assumption: we are talking about the interleaved view of time since most people design individual domains this way and leave the major asynchronous issues to external events. In this situation it is possible to assume that the A instance is gone when in an action that is sufficiently downstream of the E3s. In the simultaneous view one would have to have A's delete action send an event to note its demise to some wait state in the unrelated processing to allow the unrelated processing to continue. This is because one knows that the E3 path might be done in parallel so, however unlikely, it could get to the critical point before the E2 path finishes (which is probably why the interleaved view is so attractive). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... Somewhat belated. I've had a cold... [Amusing reminiscences snipped] > If external events can't be placed in the queue asynchronously, then the architecture has made simplifying > assumptions that are specific to the application (or a class of similar applications) and its interactions > with the outside world. This is true even for the interleaved view of time. [We do basically the same > thing -- see FWIW below -- but not in the architecture.] It's true that external events can't be placed in the *internal* event queue asynchronously, but the system as a whole can of course accept outside world messages asynchronously. These will get converted to external events via an external queue. > > It's quite acceptable for both relevant and unrelated state actions > > to be processed during the warp. However, you seem to be implying > > that the event queue may not empty because an exteral event may give > > rise to a continuous stream of internal events. With the possible > > exception of delayed events, this cannot happen with a vaild OOA > > model. > No, I am saying that if an action that is part of the relevant processing also generates an unrelated event, > then the unrelated event may give rise to a continuous stream of internal events so that the queue will not > empty to allow a warp return. What I don't understand is in what circumstances does an unrelated event give rise to a continuous [infinite] stream of internal events? [Other stuff snipped] BTW, I've just had a look through the "4W1H for translation thread". Just what does 4H1W mean? -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > Somewhat belated. 
I've had a cold... Mine is also belated -- I was in a class for a couple of days.
> > If external events can't be placed in the queue asynchronously, then the architecture has made simplifying > > assumptions that are specific to the application (or a class of similar applications) and its interactions > > with the outside world. This is true even for the interleaved view of time. [We do basically the same > > thing -- see FWIW below -- but not in the architecture.] > > It's true that external events can't be placed in the *internal* event queue > asynchronously, but the system as a whole can of course accept outside world > messages asynchronously. These will get converted to external events via an > external queue.
I guess I misunderstood what you were saying. I thought you said something to the effect that in your system an external event would not be received until the current one completed processing.
> > No, I am saying that if an action that is part of the relevant processing also generates an unrelated event, > > then the unrelated event may give rise to a continuous stream of internal events so that the queue will not > > empty to allow a warp return. > > What I don't understand is in what circumstances does an unrelated event > give rise to a continuous [infinite] stream of internal events?
My delete example was exactly that. The A instance needed to delete itself but couldn't until the B instances' actions completed. This is dependent only on the B actions, not the rest of the application processing. The overall application thread of processing continues after A has been deleted and that thread continues due to events generated from those same B instances' actions. If there is no need for the application to wait for an external event, then all of the application's remaining processing would stem from those events. Without warp those B instance actions would issue two events: one back to A that would be counted until the delete is safe and a second to continue the application processing. With warp, only the second event would be generated. However, those events would essentially initiate the remainder of the application's processing and that could be arbitrarily complex.
> BTW, I've just had a look through the "4W1H for translation thread". Just > what does 4H1W mean?
It is a standard format for process definition where a table is developed with each row being a process step and the column headings are: What (the process step is), When (the process step is executed), Where (the process step takes place), Who (performs the process step), and How (the process step is done). I thought it was standard for all process improvement, but maybe it is special to TQM/CMM. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > > It's true that external events can't be placed in the *internal* event queue > > asynchronously, but the system as a whole can of course accept outside world > > messages asynchronously. These will get converted to external events via an > > external queue. > I guess I misunderstood what you were saying. I thought you said something to the effect that in your system an > external event would not be received until the current one completed processing.
Perhaps I should have given you a little more context. Let's assume we have a real-time multi-tasking system which has two tasks of interest. An Application task and a Mouse Driver task (just for demonstration purposes). Tasks may send messages to each other via message queues (mailboxes). The Application task has the application and service domains, the software architecture and its internal event queue. The job of the Mouse Driver task is to detect mouse button presses (how it does this is not important). When a button is pressed the Mouse Driver task sends a message to a message queue where it waits to be read by the Application task. After the Application task has finished processing its last external event (and resulting internal events) it reads the message from the message queue and converts it to an external event which is placed on the internal event queue and processed until the internal event queue is (again) empty. The point I'm trying to make is that the Application task may be in the middle of processing an internal event and at the same time a real world button press message/event is being recorded for the Application task's future use. > > > No, I am saying that if an action that is part of the relevant processing also generates an unrelated event, > > > then the unrelated event may give rise to a continuous stream of internal events so that the queue will not > > > empty to allow a warp return. > > > > What I don't understand is in what circumstances does an unrelated event > > give rise to a continuous [infinite] stream of internal events? > My delete example was exactly that. The A instance needed to delete itself but couldn't until the B instances' > actions completed. This is dependent only on the B actions, not the rest of the application processing. The > overall application thread of processing continues after A has been deleted and that thread continues due to > events generated from those same B instances' actions. If there is no need for the application to wait for an > external event, then all of the application's remaining processing would stem from those events. > Without warp those B instance actions would issue two events: one back to A that would be counted until the delete > is safe and a second to continue the application processing. With warp, only the second event would be > generated. However, those events would essentially initiate the remainder of the application's processing and > that could be arbitrarily complex. In my cold induced haze I missed an important point from your delete example, so I've resurrected part of it below. smf> It does not matter how long A has to hang around as long as the A smf> instance is not involved with the unrelated processing. hsl> This is the crucial assumption that I disagree with. Suppose somewhere in that unrelated processing in my hsl> example one needed to generate events to the existing A instances. If the A instance had not been deleted hsl> in a timely fashion, an event to the wrong instance will be generated. Worse, since the E2 event is self hsl> directed it will be processed before others directed to A, so the instance will not exist when the second hsl> event is actually presented. The point being; I stated the condition that the A instance must not be involved in any further processing during the warp. You seem to disagree but go on to point out how dangerous it would be if events were generated to the A instance! 
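Going back to the two-task setup at the top of this message, here is a rough sketch of the arrangement (Python threads and a Queue standing in for the RTOS tasks and mailbox; mouse_driver_task, handle_click and the event names are invented for illustration): the button press is recorded asynchronously in the mailbox, but the Application task only looks at it once its internal event queue has drained.

import queue
import threading
import time

mailbox = queue.Queue()          # outside-world messages wait here
internal_events = []             # the Application task's internal event queue

def mouse_driver_task():
    for press in ("LEFT", "RIGHT"):
        time.sleep(0.01)               # a button press is detected...
        mailbox.put(press)             # ...and recorded asynchronously
    mailbox.put(None)                  # shutdown marker, for the demo only

def handle_click(button):
    # A state action; it may generate further internal events.
    internal_events.append(lambda: print("processed click:", button))

def application_task():
    while True:
        message = mailbox.get()        # read only when the internal queue is empty
        if message is None:
            return
        internal_events.append(lambda b=message: handle_click(b))  # external -> event
        while internal_events:         # process until the internal queue drains
            internal_events.pop(0)()

threading.Thread(target=mouse_driver_task).start()
application_task()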
-- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 'archive.9902' -- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > Perhaps I should have given you a little more context. Let's assume > we have a real-time multi-tasking system which has two tasks of > interest. An Application task and a Mouse Driver task (just for > demonstration purposes). Tasks may send messages to each other via > message queues (mailboxes). > > The Application task has the application and service domains, the > software architecture and its internal event queue. The job of the > Mouse Driver task is to detect mouse button presses (how it does > this is not important). > > When a button is pressed the Mouse Driver task sends a message to a > message queue where it waits to be read by the Application task. > > After the Application task has finished processing its last external > event (and resulting internal events) it reads the message from the > message queue and converts it to an external event which is placed > on the internal event queue and processed until the internal event > queue is (again) empty. OK, in this case the only external events are the user's mouse click (asynchronous bridge) and the Application's request (to the Mouse Driver, which is presumably a synchronous bridge) for the next message. I would regard the "external" event created by the Application from the message as an internal event in the Application (i.e., it would be generated based upon an IF in the action that had the wormhole requesting the next message). It would be an external event if the Mouse Driver returned the message asynchronously in response to the Application's request. In any case, the Application is controlling the sequence and is preventing a new event from being presented before the current one's processing thread is completed (i.e., it doesn't ask for a new event until it is ready). In your earlier explanation I misunderstood you to say that the architecture was ensuring that a new external event could not be presented to a domain until the current one's processing thread was completed. Typically architectures present external events to domains as they are encountered (i.e., asynchronously). Under your description that is the case, so there is nothing unusual. > smf> It does not matter how long A has to hang around as long as the A > smf> instance is not involved with the unrelated processing. > > hsl> This is the crucial assumption that I disagree with. Suppose somewhere in that unrelated processing in my > hsl> example one needed to generate events to the existing A instances. If the A instance had not been deleted > hsl> in a timely fashion, an event to the wrong instance will be generated. Worse, since the E2 event is self > hsl> directed it will be processed before others directed to A, so the instance will not exist when the second > hsl> event is actually presented. > > The point being; I stated the condition that the A instance must not > be involved in any further processing during the warp. You seem to > disagree but go on to point out how dangerous it would be if events > were generated to the A instance! Just to clarify, there are two issues here: (1) I was driving on the fact that if the Bs generated events for unrelated processing, then the warp might not return when expected (i.e., when the Bs finished accessing A's data). 
(2) If the warp does not return in a timely fashion, then the instance will linger after the Analyst might reasonably expect it to be gone. That could result in a side effect. An *example* of the side effect might be to have an event sent to it in that unrelated processing from (1) that would not have been sent if it had been deleted when the Bs were done with it. This could happen if it showed up in a relationship navigation in the unrelated processing. You are correct that it is misleading, though, because of the reference to the E2 event. As postulated in the original example, that could never get on the queue unless warp returned. So to have both the E2 event and an event from an invalid navigation would require doing the navigation before the warp return and generating both events after the warp return. The only way this could happen without gauche modeling would be if a conditional relationship were instantiated as a result of the invalid navigation that was, itself, navigated after the warp returned. Admittedly not too likely and the event generation itself would probably croak. However, having the A respond to the event from the invalid navigation during the unrelated processing would still be a problem. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > > When a button is pressed the Mouse Driver task sends a message to a > > message queue where it waits to be read by the Application task. > > > > After the Application task has finished processing its last external > > event (and resulting internal events) it reads the message from the > > message queue and converts it to an external event which is placed > > on the internal event queue and processed until the internal event > > queue is (again) empty. > OK, in this case the only external events are the user's mouse click (asynchronous bridge) and the Application's request > (to the Mouse Driver, which is presumably a synchronous bridge) for the next message. In OOA terms, the mouse click is the only event shown. Nowhere in the OOA model would you need any sort of request event to the mouse driver domain. > I would regard the "external" > event created by the Application from the message as an internal event in the Application (i.e., it would be generated > based upon an IF in the action that had the wormhole requesting the next message). I think the mouse click event is a proper external event. There is no wormhole since there is no need to request the mouse click event. > It would be an external event if the > Mouse Driver returned the message asynchronously in response to the Application's request. Wouldn't that just make it a solicted external event? Which I don't think it is. > In any case, the Application > is controlling the sequence and is preventing a new event from being presented before the current one's processing > thread is completed (i.e., it doesn't ask for a new event until it is ready). Yes. The Application task knows the thread has completed when the internal event queue becomes empty. > In your earlier explanation I misunderstood you to say that the architecture was ensuring that a new external event > could not be presented to a domain until the current one's processing thread was completed. 
In this example I would have the bridge to the Mouse Driver handle the job of reading the message queue, sending the event and starting up the architecture. Re: Warping
> Just to clarify, there are two issues here: > (1) I was driving on the fact that if the Bs generated events for unrelated processing, then the warp might not return > when expected (i.e., when the Bs finished accessing A's data). > (2) If the warp does not return in a timely fashion, then the instance will linger after the Analyst might reasonably > expect it to be gone. That could result in a side effect. An *example* of the side effect might be to have an event > sent to it in that unrelated processing from (1) that would not have been sent if it had been deleted when the Bs were > done with it. This could happen if it showed up in a relationship navigation in the unrelated processing.
Yes, it's certainly the responsibility of the analyst to ensure consistency. -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145
Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Finn wrote: > I think the mouse click event is a proper external event. There is > no wormhole since there is no need to request the mouse click event.
Surely all external "events" must be notified through wormholes. This is the only way that a domain can be manipulated. It may be an input-only wormhole, but it's still a wormhole. The wormhole causes an SDFD to be triggered. This can use accessors and event generators to modify the domain's state. I find that many SDFDs don't actually need to generate events. An attribute access, and possibly an outgoing wormhole, are often sufficient. It can be useful to use the old trick of sending an event and setting an attribute to say it's been sent. This allows the receiving state model to ignore the event and check the attribute later. In OOA, every directed-pair of instances can have an independent event queue. This provides plenty of scope for cleverness in the architecture. One thing that I can't find anywhere is a timing rule for event generators in SDFDs. Instance-to-instance event ordering is always preserved; but are there any similar guarantees for SDFD-to-instance event ordering? Either way, it is perfectly valid to delay event delivery until the warp has completed. As I said previously, the major problem I see with your warp is the deadlock potential. Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc., San Jose, CA 95131 mailto:David.Whipp@mc.hl.Siemens.de tel. (408) 895 5076 Opinions are my own. Factual statements may be in error.
peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 01:18 PM 2/3/99 -0800, shlaer-mellor-users@projtech.com wrote: >Dave Whipp writes to shlaer-mellor-users: >In OOA, every directed-pair of instances can have an independent >event queue. This provides plenty of scope for cleverness in the >architecture. One thing that I can't find anywhere is a timing >rule for event generators in SDFDs. Instance-to-instance event >ordering is always preserved; but are there any similar guarantees >for SDFD-to-instance event ordering?
I do not believe this is ambiguous in the method.
The object instance in the client domain that invoked the SDFD is the official state machine source for any events sent from the SDFD, and therefore ordering of events sent from the SDFD falls into two areas: - the same client instance invokes the same service twice, so the events from the SDFD are guaranteed to be received in the order sent - different client instances invoke the same service, and there cannot be any guarantee in the order of when the events generated from the SDFD are received _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________|
Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- "Peter J. Fontana" wrote: > I do not believe this is ambiguous in the method. The object instance in > the client domain that invoked the SDFD is the official state machine source > for any events sent from the SDFD, and therefore ordering of events sent > from the SDFD falls into two areas: > - the same client instance invokes the same service twice, so the events > from the SDFD are guaranteed to be received in the order sent > - different client instances invoke the same service, and there cannot be > any guarantee in the order of when the events generated from the SDFD are > received
The only problem is identifying the "client instance" if the client is a non-OOA domain. Simple atoms like objects or classes may not be appropriate. For example, a state machine may be implemented via delegation and the state pattern. All the different state objects are part of the same conceptual instance. If the client isn't even OO, then it's probably even harder. Even if the client instance is still identified, there is still an ambiguity. Imagine an SDFD that creates objects that record the order in which it receives messages. At the same time, it generates events to an instance within the domain. If multiple clients invoke wormholes to trigger the SDFD, then will the recorded order of the messages match the order in which the destination instance receives its events? Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc., San Jose, CA 95131 mailto:David.Whipp@mc.hl.Siemens.de tel. (408) 895 5076 Opinions are my own. Factual statements may be in error.
Ed Wegner writes to shlaer-mellor-users: -------------------------------------------------------------------- > >>>> Peter J. Fontana February 4, 1999 >10:43 am >>> >peterf@pathfindersol.com (Peter J. Fontana) writes to >shlaer-mellor-users: >-------------------------------------------------------------------- > >At 01:18 PM 2/3/99 -0800, shlaer-mellor-users@projtech.com wrote: >>Dave Whipp writes to >shlaer-mellor-users: > >>In OOA, every directed-pair of instances can have an independent >>event queue. This provides plenty of scope for cleverness in the >>architecture. One thing that I can't find anywhere is a timing >>rule for event generators in SDFDs. Instance-to-instance event >>ordering is always preserved; but are there any similar guarantees >>for SDFD-to-instance event ordering? > >I do not believe this is ambiguous in the method.
The object >instance in the client domain that invoked the SDFD is the official >state machine source for any events sent from the SDFD, and >therefore ordering of events sent from the SDFD falls into two >areas: >- the same client instance invokes the same service twice, so the >events from the SDFD are guaranteed to be received in the order sent >- different client instances invoke the same service, and there >cannot be any guarantee in the order of when the events generated >from the SDFD are received Good answer, but I think to a different question. For me, the question is: What are the OOA rules regarding event ordering for multiple events sent from within a single invocation of an SDFD...? i.e. will multiple events from a single SDFD invocation to the same instance be received in the same order as they were sent, but all other bets are off? Was this your question, Dave Whipp? Ed Wegner Tait Electronics Ltd ed_wegner@tait.co.nz peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 07:14 PM 2/3/99 -0800, shlaer-mellor-users@projtech.com wrote: >Dave Whipp writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >"Peter J. Fontana" wrote: > >> I do not believe this is ambiguous in the method. >The only problem is identifying the "client instance" if the client is a >non-OOA domain. As soon as you depart from OOA on the client side, then your Design ("Architecture" for you die hards) dictates what a "sending instance" is - but rest assured that you have one. Basically it is defined by threads of control. If you have function F1 that calls the SDFD, and it runs in thread of control T1, and it also runs in thread of control T2, then you have two "client instances". This may seem simplified, but I believe it covers what is relevant to the "order of receipt" issue. >Imagine an SDFD that creates objects that record the order >in which it receives messages. At the same time, it generates events >to an instance within the domain. If multiple clients invoke >wormholes to trigger the SDFD, then will the recorded order of the >messages match the order in which the destination instance receives >its events? Possibly not. Perhaps a safe approach here is to move away from order dependence in the event sent to the actor object. You might store the event payload in a new instance of a "request" object (and their order relative to each other can preserve any needed ordering), and the event to the actor simply indicates that some request is available. A relationship from the actor to the request can serve as the "attribute" that remembers the request arrival in case the actor happens to ignore the "request arrived" event. This way, only the creation of the request object captures the order. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Ed Wegner wrote: > Good answer, but I think to a different question. For me, the > question is: What are the OOA rules regarding event ordering for > multiple events sent from within a single invocation of an SDFD...? > i.e.
will multiple events from a single SDFD invocation to the same > instance be received in the same order as they were sent, but all > other bets are off? > > Was this your question, Dave Whipp? Actually, no it wasn't. I am not really happy with specifying sequential constraints between event generators. It's difficult to contrive a situation where it's really necessary. There's almost certainly a better way. (please feel free to provide a counter example). My point was that, however good Peter's answer was, it is an extrapolation beyond the method. The only safe thing to do is to assume that there are no guarantees, unless explicitly stated in the method (or, possibly, in a project-specific enhancement). Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc., San Jose, CA 95131 mailto:David.Whipp@mc.hl.Siemens.de tel. (408) 895 5076 Opinions are my own. Factual statements may be in error. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > > > When a button is pressed the Mouse Driver task sends a message to a > > > message queue where it waits to be read by the Application task. > > > > > > After the Application task has finished processing its last external > > > event (and resulting internal events) it reads the message from the > > > message queue and converts it to an external event which is placed > > > on the internal event queue and processed until the internal event > > > queue is (again) empty. > > > OK, in this case the only external events are the user's mouse click (asynchronous bridge) and the Application's request > > (to the Mouse Driver, which is presumably a synchronous bridge) for the next message. > > In OOA terms, the mouse click is the only event shown. Nowhere in > the OOA model would you need any sort of request event to the mouse > driver domain. But you said the Mouse Driver domain is waiting to be read by the Application domain. To do that read the Application has to communicate over a bridge, so there will be a wormhole to process the read request. > > I would regard the "external" > > event created by the Application from the message as an internal event in the Application (i.e., it would be generated > > based upon an IF in the action that had the wormhole requesting the next message). > > I think the mouse click event is a proper external event. There is > no wormhole since there is no need to request the mouse click event. I agreed the mouse click is an external event. I was referring to the event in your 2nd paragraph ("converts it to an external event"). The external event is the message read from the "message queue", which I interpreted to be in Mouse Driver. If the wormhole for the Application's request is synchronous, then the action issuing that request will perform a test on the returned data and issue an internal event, if necessary. If Mouse Driver responds asynchronously, that response will be an external event and the action that fields it will perform a test on the event data and issue an internal event, if necessary. Either way that event is internal. > > It would be an external event if the > > Mouse Driver returned the message asynchronously in response to the Application's request. > > Wouldn't that just make it a solicited external event? Which I don't > think it is. Yes, it would be solicited. The result is _always_ solicited if Mouse Driver waits for Application to read it for mouse activity.
The only issue is whether the request is synchronous or asynchronous. > > In any case, the Application > > is controlling the sequence and is preventing a new event from being presented before the current one's processing > > thread is completed (i.e., it doesn't ask for a new event until it is ready). > > Yes. The Application task knows the thread has completed when the > internal event queue becomes empty. I was referring to the fact that Application was controlling when it received _external_ events. By your description, it will only receive the mouse information as a result of a request that it initiates. > > Just to clarify, there are two issues here: > > > (1) I was driving on the fact that if the Bs generated events for unrelated processing, then the warp might not return > > when expected (i.e., when the Bs finished accessing A's data). > > > (2) If the warp does not return in a timely fashion, then the instance will linger after the Analyst might reasonably > > expect it to be gone. That could result in a side effect. An *example* of the side effect might be to have an event > > sent to it in that unrelated processing from (1) that would not have been sent if it had been deleted when the Bs were > > done with it. This could happen if it showed up in a relationship navigation in the unrelated processing. > > Yes, it's certainly the responsibility of the analyst to ensure > consistency. Exactly. My issue is that warp introduces the potential for unruly side effects that would not be present if it were not used. The analyst has to review the design for those side effects. If one used counted events this would not be necessary. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Dave Whipp... > > I think the mouse click event is a proper external event. There is > > no wormhole since there is no need to request the mouse click event. > Surely all external "events" must be notified through wormholes. This > is the only way that a domain can be manipulated. It may be an > input-only wormholes; but its still a wormhole. You're right. I'm just saying, because the Application does not request the external mouse click event there's no wormhole for the request. Sorry if I haven't answered your question. > In OOA, every directed-pair of instances can have an independent > event queue. This provides plenty of scope for cleverness in the > architecture. One thing that I can't find anywhere is a timing > rule for event generators in SDFDs. Instance-to-instance event > ordering is always preserved; but are there any similar guarentees > for SDFD-to-instance event ordering? I don't use SDFDs myself, but I thought it just acted like a proxy instance. If you want an authoritative answer you could ask Ian Wilkie at Kennedy Carter. Perhaps there's an updated version of his Synchronous Services paper? -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > > > > When a button is pressed the Mouse Driver task sends a message to a > > > > message queue where it waits to be read by the Application task. 
> > > > > > > > After the Application task has finished processing its last external > > > > event (and resulting internal events) it reads the message from the > > > > message queue and converts it to an external event which is placed > > > > on the internal event queue and processed until the internal event > > > > queue is (again) empty. > > > > > OK, in this case the only external events are the user's mouse click (asynchronous bridge) and the Application's request > > > (to the Mouse Driver, which is presumably a synchronous bridge) for the next message. > > > > In OOA terms, the mouse click is the only event shown. Nowhere in > > the OOA model would you need any sort of request event to the mouse > > driver domain. > But you said the Mouse Driver domain is waiting to be read by the Application domain. To do that read the Application has to > communicate over a bridge, so there will be a wormhole to process the read request. When I used the word "read" I was talking about the Mouse Driver task, the Application task and the message queue between them (in the implementation). > > > I would regard the "external" > > > event created by the Application from the message as an internal event in the Application (i.e., it would be generated > > > based upon an IF in the action that had the wormhole requesting the next message). > > > > I think the mouse click event is a proper external event. There is > > no wormhole since there is no need to request the mouse click event. > I agreed the mouse click is an external event. I was referring to the event in your 2nd paragraph ("converts it to an > external event"). This is the same event. I've only talked about the one event so far. > The external event is the message read from the "message queue", which I interpreted to be in Mouse > Driver. The event is contained in the message. It's best to think of the message queue as being between the two tasks. > If the wormhole for the Application's request is synchronous, then the action issuing that request will perform a > test on the returned data and issue an internal event, if necessary. If Mouse Driver responds asynchronously, that response > will be an external event and the action that fields it will perform a test on the event data and issue an internal event, if > necessary. Either way that event is internal. I think you should forget about the Application domain requesting the mouse click event - it's not how I do it. :-) When you model a domain do you always use a wormhole to request an incoming event from a bridge? > > > It would be an external event if the > > > Mouse Driver returned the message asynchronously in response to the Application's request. > > > > Wouldn't that just make it a solicted external event? Which I don't > > think it is. > Yes, it would be solicited. No, the mouse click event is an unsolicited external event. > The result is _always_ solicited if Mouse Driver waits for Application to read it for mouse > activity. The only issue is whether the request is synchronous or asynchronous. The fact that the message (transporting the event) is put on a message queue by the mouse driver task and read sometime later by the Application task does not change the nature of the OOA event. > > > In any case, the Application > > > is controlling the sequence and is preventing a new event from being presented before the current one's processing > > > thread is completed (i.e., it doesn't ask for a new event until it is ready). > > > > Yes. 
The Application task knows the thread has completed when the > > internal event queue becomes empty. > I was referring to the fact that Application was controlling when it received _external_ events. By your description, it will > only receive the mouse information as a result of a request that it initiates. I agree if you're talking about the Application task reading the message queue. > > > Just to clarify, there are two issues here: > > > > > (1) I was driving on the fact that if the Bs generated events for unrelated processing, then the warp might not return > > > when expected (i.e., when the Bs finished accessing A's data). > > > > > (2) If the warp does not return in a timely fashion, then the instance will linger after the Analyst might reasonably > > > expect it to be gone. That could result in a side effect. An *example* of the side effect might be to have an event > > > sent to it in that unrelated processing from (1) that would not have been sent if it had been deleted when the Bs were > > > done with it. This could happen if it showed up in a relationship navigation in the unrelated processing. > > > > Yes, it's certainly the responsibility of the analyst to ensure > > consistency. > Exactly. You're not going to let this Warp thread die, are you? :-) > My issue is that warp introduces the potential for unruly side effects that would not be present if it were not > used. But there are advantages if you want to use it. > The analyst has to review the design for those side effects. Certainly. > If one used counted events this would not be necessary. But there are drawbacks to this technique. -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > > But you said the Mouse Driver domain is waiting to be read by the Application domain. To do that read the Application has to > > communicate over a bridge, so there will be a wormhole to process the read request. > > When I used the word "read" I was talking about the Mouse Driver > task, the Application task and the message queue between them (in > the implementation). > The event is contained in the message. It's best to think of the > message queue as being between the two tasks. This is what bothers me -- see below. > > > > > I think you should forget about the Application domain requesting > the mouse click event - it's not how I do it. :-) > > When you model a domain do you always use a wormhole to request an > incoming event from a bridge? > > > > The fact that the message (transporting the event) is put on a > message queue by the mouse driver task and read sometime later by > the Application task does not change the nature of the OOA event. I don't think we are that far off with this last tangent. You have chosen to implement the bridge in this fashion during the translation. My issues were at the OOA level. In the OOA the communications are across the bridge. As described, the Application requests a read from the Mouse Driver bridge and the Mouse Driver responds (whether synchronously or asynchronously was not clear). My issue is simply that in the OOA there *must* be a wormhole in the Application domain for that request, regardless of how you handle it in the implementation.
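[A rough sketch of the point just made, with Python standing in purely for an action language; the names (poll_for_message, E_MOUSE_ACTIVITY) are invented for illustration and do not come from either post. The synchronous and asynchronous possibilities distinguished in the next paragraph are marked in the comments:]

    class MouseReadAction:
        """Application-side action that owns the read-request wormhole."""

        def __init__(self, bridge, internal_event_queue):
            self.bridge = bridge                 # bridge/wormhole implementation
            self.queue = internal_event_queue    # this domain's event queue

        def request_next_message(self):
            # Synchronous form: the wormhole returns the data packet directly
            # and the action tests it to decide whether an internal event is
            # warranted.
            packet = self.bridge.poll_for_message()   # wormhole: request data read
            if packet and packet.get("activity"):
                self.queue.append(("E_MOUSE_ACTIVITY", packet))  # internal event

        # Asynchronous form (outline only): the wormhole merely sends the
        # request; the bridge later places an *external* response event,
        # carrying the same packet, on the domain's queue, and the action
        # that fields it performs the same test.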
If that bridge communication is modeled as synchronous, then there must also be a test on the returned data packet in the Application OOA action to determine if an event needs to be generated because of some mouse activity (i.e., the synchronously returned data packet determines whether there was mouse activity). In that case I would regard the generated event as internal. If that bridge communication is modeled as asynchronous, then there would be a single, subsequent external response event from the Mouse Driver bridge (without an explicit wormhole unless the CASE tool requires a shadow object for the bridge). [I assumed synchronous because it sounded like the data already existed in Mouse Driver so no events would be processed in Mouse Driver to fetch the data.] If your OOA does not fit this pattern, then I believe that you have done significantly more than introduce an event queue manipulation for the warp. B-) Whether the response event is internal (synchronous response) or external (asynchronous response), I would regard it as solicited by the Application domain. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... Sorry for the delay. I've had a few other things to deal with. I have taken the liberty of reformatting your text from 132 columns to 80 columns. > > > But you said the Mouse Driver domain is waiting to be read by the Application domain. To do that read the Application has to > > > communicate over a bridge, so there will be a wormhole to process the read request. > > > > When I used the word "read" I was talking about the Mouse Driver > > task, the Application task and the message queue between them (in > > the implementation). > > > The event is contained in the message. It's best to think of the > > message queue as being between the two tasks. > This is what bothers me -- see below. Hmm. There is nothing controversial here. I assumed you had some knowledge of how multi-tasking systems were constructed. Have I assumed wrongly? > > > > > > I think you should forget about the Application domain requesting > > the mouse click event - it's not how I do it. :-) > > > > When you model a domain do you always use a wormhole to request an > > incoming event from a bridge? > > > > > > > > The fact that the message (transporting the event) is put on a > > message queue by the mouse driver task and read sometime later by > > the Application task does not change the nature of the OOA event. > I don't think we are that far off with this last tangent. You have chosen to > implement the bridge in this fashion during the translation. My issues were at > the OOA level. The Application task contains code for the OOA bridge. The bridge does NOT correspond to the "space" between the two tasks. My objective here is to show that it is not neccessary to place external events in the internal event queue asynchronously to produce a general architecture. I'm doing this by demonstrating how the system as a whole can be made to accept real world events asynchronously. This will involve implementation issues. :-) > In the OOA the communications are across the bridge. 
As described, the > Application requests a read from the Mouse Driver bridge and the Mouse Driver > responds (whether synchronously or asynchronously was not clear). My issue is > simply that in the OOA there *must* be a wormhole in the Application domain for > that request, regardless of how you handle it in the implementation. I'm finding it very difficult to see where you're coming from with this. The idea that an object in a domain must request a read (via a wormhole) from a bridge which then responds with an event is new to me. Could you explain it? > If that bridge communication is modeled as synchronous, then there must also be > a test on the returned data packet in the Application OOA action to determine if > an event needs to be generated because of some mouse activity (i.e., the > synchronously returned data packet determines whether there was mouse activity). > In that case I would regard the generated event as internal. > If that bridge communication is modeled as asynchronous, then there would be a > single, subsequent external response event from the Mouse Driver bridge (without > an explicit wormhole unless the CASE tool requires a shadow object for the > bridge). [I assumed synchronous because it sounded like the data already > existed in Mouse Driver so no events would be processed in Mouse Driver to fetch > the data.] > If your OOA does not fit this pattern, then I believe that you have done > significantly more than introduce an event queue manipulation for the warp. B-) I would be interested to know if all that stuff on warping was of any use to anyone? Email me if you don't want to post here. > Whether the response event is internal (synchronous response) or external > (asynchronous response), I would regard it as solicited by the Application > domain. -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > I have taken the liberty of reformatting your text from 132 columns > to 80 columns. Interesting. My outgoing message lines are wrapped at 60 characters. (I used to do 80 but that doesn't leave room for quote marks.) I'll send you a couple of messages offline with a couple of variants to see if Netscape is working right. > > > The event is contained in the message. It's best to think of the > > > message queue as being between the two tasks. > > > This is what bothers me -- see below. > > Hmm. There is nothing controversial here. I assumed you had some > knowledge of how multi-tasking systems were constructed. Have I > assumed wrongly? Yes, I do. But my problem lies with the way you described the OOA, which should not care whether one is multitasking. The message queue is a specific bridge implementation artifact in the architecture. My issue is strictly with the way you described the events, data, and processing in the OOA. > The Application task contains code for the OOA bridge. The bridge > does NOT correspond to the "space" between the two tasks. These statements make me even more nervous than the first one! If this is your belief, then I don't think you are doing Shlaer-Mellor. Render unto the bridge the things architectural and render unto the Application the things in the problem space. > > In the OOA the communications are across the bridge. 
As described, the > > Application requests a read from the Mouse Driver bridge and the Mouse Driver > > responds (whether synchronously or asynchronously was not clear). My issue is > > simply that in the OOA there *must* be a wormhole in the Application domain for > > that request, regardless of how you handle it in the implementation. > > I'm finding it very difficult to see where you're coming from with > this. The idea that an object in a domain must request a read (via > a wormhole) from a bridge which then responds with an event is new > to me. Could you explain it? Aha, this may be the core of the disconnect. I suggest you read the Bridges & Wormholes paper on the PT web site. All communications between domains *must* be through a bridge. The manifestation of a bridge within a domain OOA is a wormhole in the ADFD or an incoming external event. (Not quite true: transforms can be implicit synchronous wormholes, but let's not go there.) Given this, the OOA for the communication you described must take one of two forms...

Synchronous (which I assumed since it was a simple data read):
   request data read -> ADFD data flow to wormhole
   response data from wormhole -> ADFD data flow from wormhole to test process in action

Asynchronous:
   generate request event -> ADFD event flow to wormhole
   wormhole places response event w/ data packet on domain's event queue
   response event initiates transition -> ADFD data flow to test process in action

Either way the response data message comes back *into* the domain explicitly. Under the ADFD rules it then has to be tested by an ADFD test process to determine whether the mouse action warrants the generation of an internal event. If so, the flow from the test process goes to an event generator for the internal event. Having said this, it occurs to me there may be another source of miscommunication here. As I read the original description the client domain requests mouse data from the Mouse Driver and then examines the response message data to determine whether something happened that was interesting enough to warrant generating an internal event. If this is not the case and the Mouse Driver only sends a response event when something interesting happens, then we are in an entirely different situation. Then we have...

Asynchronous (only possibility):
   generate request event -> ADFD event flow to wormhole
   wormhole places response event w/ data packet on domain's event queue
   response event initiates transition (same one as the internal event previously), so the test and the internal event generation are unnecessary

This is more consistent with what you were describing for the OOA -- but this is not a simple data read -- a state machine is active in the Mouse Driver. [The same thing could occur if the bridge were "smart" and interpreted the message data to determine whether an event should be placed on the queue before the wormhole returns. But I think that would be a poor choice because it hides processing in the bridge implementation that is almost surely of interest at either the Application's or the Mouse Driver's level of abstraction. It would also render the requested message data as superfluous in the Application domain.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St.
L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi Everyone I think our organization needs a more formal pre-IM stage. Currently our pre-IM stage is very adhoc. I would be interested to know what Shlaer Mellor model builders out there doing to structure their pre-OOA process stage. Maybe in a related question. Requirements are the desired effects to be achieved by the software. Someone has to think up those effects. Someone has to decide that those effects would be good to achieve. In documenting requirements; what (if any) OOA models go into the requirements document? After all, requirements are about the phenomena of the application domain, not about the software being built. So what, exactly, are the contents of the requirements document? I think most OOAs work from fairly detailed requirements, and hardly mention what goes on before hand. Admitedly it's not part of the 'process methodology', but I would like to know your views. Kind Regards, Allen Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen (et al) Interested to see your posting. The organisation I am working for are currently in the process of defining requirements for a large scale system development, the requirements of which are far from being well understood. I am attempting to champion a process of Use-case analysis in order to help define the end-users of the system and the things they might want to use the system for. Not having had any prior experience of this particular technique, we can only speculate how successful this will be, and we will have to make up the process as we go. We have already defined much of the high-level requirements and design overview in descriptive documentation but have not nailed down much real functionality. The plan now is to perform a series of brainstorming sessions aimed at identifying the "actors" (the end users of the system) and the "Use-Cases" (the ways in which the actors will use the system). We intend to formally capture these use-cases in an appropriate CASE tool. These initial use-cases will be (dare I say it) elaborated (sorry everyone) by conducting a series of "interviews" with interested parties (including end-user and marketing representatives) to determine the consensus of opinion on their accuracy. Hopefully the next step from here will be to construct some high level sequence diagrams. During this process it is inevitable that some candidates for domains and / or objects will be identified. We may even model things which are more abstract than domains or objects. A natural (?) progression from this point is to identify all the candidate domains and put together a domain chart and then you're off into SM-OOA territory (or UML land if you're that way inclined). I believe that this approach reflects current thinking on how to find a compromise between UML and SM-OOA (but I may have got some of the details wrong). In any case, it seems to us like a good enough way of capturing high level requirements. Word of caution though, I have heard it said that this technique only reveals functional requirements and ignores other, non-functional requirements such as required performance or architectures etc. I'd be interested to hear what other people have to say about this technique. 
PS I am currently reading the only book I could find on the subject "Applying Use-cases - A Practical Guide", Geri Schneider and Jason P.Winters. If you ignore the UML / Rational bias of the book it gives some useful guidance on selecting use-cases, albeit in a patronising kind of way. >>> Allen Theobald 24/02/99 13:37:14 >>> Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi Everyone I think our organization needs a more formal pre-IM stage. Currently our pre-IM stage is very adhoc. I would be interested to know what Shlaer Mellor model builders out there doing to structure their pre-OOA process stage. Maybe in a related question. Requirements are the desired effects to be achieved by the software. Someone has to think up those effects. Someone has to decide that those effects would be good to achieve. In documenting requirements; what (if any) OOA models go into the requirements document? After all, requirements are about the phenomena of the application domain, not about the software being built. So what, exactly, are the contents of the requirements document? I think most OOAs work from fairly detailed requirements, and hardly mention what goes on before hand. Admitedly it's not part of the 'process methodology', but I would like to know your views. Kind Regards, Allen Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- I only include the below quote as a followup to my earlier post, and as point for further discussion on requirements. Ben Kovitz has an interesting book: Practical Software Requirements: A Manual of Content & Style. If found the quote below on usenet. Kind Regards, Allen 15 Dec 1998 Ben Kovitz wrote: "...In my experience object orientation and requirements often make a bad marriage. (That's often, not always.) Of course object orientation is a wonderful set of ways to structure programs and model the real world. But its very strengths in the program-and-models realm tend to cause trouble when talking about the real world directly. For example, the idea of a class and a set of methods that operate on it is a very useful discipline for keeping programs simple and maintainable. But to do this, you have to make many decisions that apply only to the model and correspond to nothing in the real world. A very common and important design decision is: which class gets which methods? Do you give the 'bake' method to the 'cookie' class or to the 'oven' class? In reality, cookies bake when you put them in a hot oven. Neither one of them has the 'bake' operation all to itself. What I've seen happen--probably doesn't happen to everyone, but it happens--is people puzzle over these model-related decisions while writing the requirements. Trying to "be object-oriented" in requirements instead of just trying to be useful often leads people to describe the problem domain in terms of concepts that are appropriate for describing data structures and flow of control in a computer program--which is to say, not to describe the problem domain at all, but to describe a computer model instead. I've seen people give the customer a huge list of classes and message traces and the like, which might all be good program design, but come from an utterly different world than the one that the customer knows about. The customer has no idea what the developers are doing, and most of the benefit of writing requirements is lost. 
My own view is that requirements are best written *entirely* in terms of the problem domain. That means using no concepts of program structure at all--not concepts of structured programming, not message-passing, not classes and methods, not polymorphism, none of that. (The logical concepts of genus and species apply generally, of course. That's the relation that is sometimes good to model using OO inheritance--and sometimes not.) Doing OO program design well is just a different skill from understanding the problem domain and what would be useful to make happen for the customer there." "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Leon Starr's book advocates making informal diagrams to describe the system in user terms. We have used this approach to bridge the gap between the written specs and what the authors of the spec really have in mind. <<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>> Dana Simonson Engineering Section Manager EF Johnson Radio Products Division Transcrypt International dsimonson@transcrypt.com www.transcrypt.com >>> Allen Theobald 02/24/99 07:37AM >>> Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi Everyone I think our organization needs a more formal pre-IM stage. Currently our pre-IM stage is very adhoc. I would be interested to know what Shlaer Mellor model builders out there doing to structure their pre-OOA process stage. Maybe in a related question. Requirements are the desired effects to be achieved by the software. Someone has to think up those effects. Someone has to decide that those effects would be good to achieve. In documenting requirements; what (if any) OOA models go into the requirements document? After all, requirements are about the phenomena of the application domain, not about the software being built. So what, exactly, are the contents of the requirements document? I think most OOAs work from fairly detailed requirements, and hardly mention what goes on before hand. Admitedly it's not part of the 'process methodology', but I would like to know your views. Kind Regards, Allen peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 11:47 AM 2/24/99 -0500, shlaer-mellor-users@projtech.com wrote: >Allen Theobald writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >I only include the below quote as a followup to my earlier post, and >as point for further discussion on requirements. Ben Kovitz has an >interesting book: Practical Software Requirements: A Manual of Content >& Style. If found the quote below on usenet. >... > My own view is that requirements are best written *entirely* in > terms of the problem domain. That means using no concepts of > program structure at all... This is an excellent quote, and shows quite clearly why in OOA/RD (and MBSE with UML) we separate Analysis from Design. Kovitz is saying that you cannot consider implementation (design) while developing requirements, and doing analysis. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| "Lynch, Chris D. 
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald quoting Kovitz: > "...In my experience object orientation and requirements often make > a bad marriage. (That's often, not always.) Of course object > orientation is a wonderful set of ways to structure programs and > model the real world. But its very strengths in the > program-and-models realm tend to cause trouble when talking about > the real world directly." There is a lot of truth to this, but it should be stressed that concepts borrowed from OOA/OOD/OOP or other formal language *can* be used productively in a requirements document. The trick is not to get carried away with a ill-fitting formalism in the name of rigor. In the end, I think it is helpful to keep the following idea in mind when organizing and writing requirements documents: if the customer cannot readily visualize a working system based on what I have written, the document is fatally flawed. While we as implementers may organize and prepare the document, it is really his to understand and own. -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- Ed Wegner writes to shlaer-mellor-users: -------------------------------------------------------------------- >>> Daniel Dearing February 25, 1999 3:10 am >>> Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- >Allen (et al) > >Interested to see your posting. The organisation I am working for >are currently in the process of defining requirements for a large >scale system development, the requirements of which are far from >being well understood. I am attempting to champion a process of >Use-case analysis in order to help define the end-users of the >system and the things they might want to use the system for. > >Not having had any prior experience of this particular technique, we >can only speculate how successful this will be, and we will have to >make up the process as we go. >I'd be interested to hear what other people have to say about this >technique. Interesting.... is there an echo in here? :-) In a posting from one of my colleagues - Clive Horn - a few months back, he mentioned that we were attempting to apply Use-case analysis to the pre-IM stage. Since then, we've stepped up this "pilot" activity on one project, and are investigating and trying out the following: 1. A requirements matrix of Actors in columns and Requirements types in rows, where some examples of requirements types are: Performance Interface Safety Portability etc. but one of them is: Operational (or Behavioural) A key difference we have recognised is that all of the non-operational Requirements type rows define what the product is. And the Operational row defines what the product does. Based on this recognition, we are proceeding as follows: All of the requirements other than the Operational ones are captured in text based or tabular documents. For the operational ones, we are developing Use-Cases and sequence diagrams. We believe we will find - but haven't yet shown - that this level of requirments capture will make domain modelling and IM much less "design oriented" or "method oriented" than it has been for us, and hopefully, much more problem space oriented. Perhaps this is due to our relatively limited problem domain expertise. 
Clive Horn writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi I was interested to read that Daniel is working on the development of a large scale system for which the requirements are far from well understood. The questions probably being asked are how will we know when we have all the requirements and, even when you've got them, how will you document the information. You may even be attending sessions trying to nail down document structure. I have found that it's the information structure that is important, not the document structure. I have found that some kind of requirements coverage matrix is needed. The word coverage is key here. This is not a matrix going down the software development life cycle, this is a matrix looking from the top. This matrix is quite simple with all the actors on one axis and the requirements type down the side. The requirements types include functional, operational, maintenance, acceptance testing etc. The actors are dependent upon your context. I find the word context very important. I try to think about this as the black box. For example, you can think about a product as a black box. Within the black box there may be several parts of the system. Each part of the system can be thought of as a black box. Within each part there may be several sub parts which can also be thought of as a black box. The actors are what you see when you look out from whichever black box you are analysing. In other words, imagine you are standing inside that box and what do you see? Most of my work has been focused on software although my colleagues have been thinking about product. Thinking about software alone as a black box, you can make up the requirements matrix and your actors might be end user, acceptance tester, service technician etc. and the requirements types are functional, operational, interface, etc. I have found that defining all the actors is vital. If you miss one, then you miss a whole column in the table. You must be clear on the definition of the requirements types and I use a book, 'Software Engineering Standards' by C. Mazza et al, ISBN 0-13-106568-8, section 3.3.2, page 21. Functional is defined as 'what the software must do'. This row across the requirements coverage matrix is totally covered by use cases. On the subject of use cases, I found there are one or two key papers to understanding the process. These were by Alistair Cockburn: Goals and Use Cases, also Using Goal Based Use Cases. These are on his web site. Also on that web site was a 'Use Case Dialog'. I had to read these papers several times before the information and the significance of the information really sank in. I have found Alistair Cockburn's concept of the water line and the ship image extremely helpful in structuring the use cases into levels. Levelling is one of the most difficult parts in my opinion.
It is also key to structuring the information. We got a bit hung up on levels, but Alistair Cockburns idea or summary goals, User Goals and User Tasks proved very helpful. I also view the waterline as the edge of the black box. Before we got into use case analysis, we attempted to do domain analysis. We did this but I never felt sure that we had covered everything. We started the use cases analysis from both top and bottom. In other words both above and below the water line. ( My colleague started at the top and I started at the bottom.). We eventually met and I was pleased to find the use cases below the water line matched up with the bullet point mission statements for the domains. At last I felt we had coverage of the functional requirements. I would be interested to hear how anyone else is getting on with this kind of stuff. If this was at all helpful, then please do drop me an E-mail. We have also used sequence diagramming and done examples from use case to code. I am finding that the combination of the requirements coverage matrix and use cases to capture functional requirements is working well. Kind regards Clive Horn clive_horn@tait.co.nz >>> Daniel Dearing 25/February/1999 03:10am >>> Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen (et al) Interested to see your posting. The organisation I am working for are currently in the process of defining requirements for a large scale system development, the requirements of which are far from being well understood. I am attempting to champion a process of Use-case analysis in order to help define the end-users of the system and the things they might want to use the system for. Not having had any prior experience of this particular technique, we can only speculate how successful this will be, and we will have to make up the process as we go. We have already defined much of the high-level requirements and design overview in descriptive documentation but have not nailed down much real functionality. The plan now is to perform a series of brainstorming sessions aimed at identifying the "actors" (the end users of the system) and the "Use-Cases" (the ways in which the actors will use the system). We intend to formally capture these use-cases in an appropriate CASE tool. These initial use-cases will be (dare I say it) elaborated (sorry everyone) by conducting a series of "interviews" with interested parties (including end-user and marketing representatives) to determine the consensus of opinion on their accuracy. Hopefully the next step from here will be to construct some high level sequence diagrams. During this process it is inevitable that some candidates for domains and / or objects will be identified. We may even model things which are more abstract than domains or objects. A natural (?) progression from this point is to identify all the candidate domains and put together a domain chart and then you're off into SM-OOA territory (or UML land if you're that way inclined). I believe that this approach reflects current thinking on how to find a compromise between UML and SM-OOA (but I may have got some of the details wrong). In any case, it seems to us like a good enough way of capturing high level requirements. Word of caution though, I have heard it said that this technique only reveals functional requirements and ignores other, non-functional requirements such as required performance or architectures etc. 
I'd be interested to hear what other people have to say about this technique. PS I am currently reading the only book I could find on the subject "Applying Use-cases - A Practical Guide", Geri Schneider and Jason P.Winters. If you ignore the UML / Rational bias of the book it gives some useful guidance on selecting use-cases, albeit in a patronising kind of way. >>> Allen Theobald 24/02/99 13:37:14 >>> Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi Everyone I think our organization needs a more formal pre-IM stage. Currently our pre-IM stage is very adhoc. I would be interested to know what Shlaer Mellor model builders out there doing to structure their pre-OOA process stage. Maybe in a related question. Requirements are the desired effects to be achieved by the software. Someone has to think up those effects. Someone has to decide that those effects would be good to achieve. In documenting requirements; what (if any) OOA models go into the requirements document? After all, requirements are about the phenomena of the application domain, not about the software being built. So what, exactly, are the contents of the requirements document? I think most OOAs work from fairly detailed requirements, and hardly mention what goes on before hand. Admitedly it's not part of the 'process methodology', but I would like to know your views. Kind Regards, Allen lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > I think our organization needs a more formal pre-IM stage. Currently > our pre-IM stage is very adhoc. I would be interested to know what > Shlaer Mellor model builders out there doing to structure their > pre-OOA process stage. I'm not sure how useful this answer will be because what we do is highly colorized, to coin a phrase, by our development environment. The Corporation has adopted a development process that applies to all products, hardware and software, that we must adhere to. It is basically a waterfall with nine phases: Statement of Need; Preliminary Requirements; Product Definition; Design; and five others that aren't relevant. These are sufficiently vague so that there is a lot of latitude. One can start work on later phases prior to transitioning out of an earlier phase, but from a project management view everyone must have completed a phase before the project officially transitions to the next phase. The waterfall is not as bad as its reputation in our case because the hardware lead times are so great (we use a lot of custom ASICs and gate arrays). You simply can't significantly change the product hardware requirements or you have a minimum of a year's delay. Since the software's main purpose in life is to run the hardware, there is a certain broad scale stability to the requirements that allows the waterfall to work. Also, we are in the electronic test business, and the basics of that business hasn't changed a lot in two decades -- every type of testing done today was done a decade ago; all that has changed is the underlying technical sophistication. This ensures a degree of familiarity with the requirements as well as stability. Though the waterfall isn't disastrous in our case, it still ain't a great fit. We do the DC and IM in the Product Definition phase while the rest of the OOA is done in Design. 
The reason the DC and IM are done earlier is that we have to commit to schedule to the nearest week at the end of that phase (the main source of waterfall misfit). The only way we can get sufficiently accurate estimates for this is if we know exactly how many objects will be in the system. The Statement of Need phase is not too relevant here; it is basically the process of making sure there will be a market for the product. Some pre-OOA work begins in the Preliminary Requirements phase where requirements are scrubbed and the project scope is estimated. By the standards of any text on requirements management our requirements suck. Basically they are just a feature list (e.g., "needs guided probe diagnostics"). Worse, we have even been in situations where we were legally prohibited from asking the client what they really meant due to the vagaries of a bidding process. Our schedules are necessarily aggressive because we tend to try to fund R&D with specific contracts. Those customers typically define a fixed date that is out of our control. So the requirements scrubbing tends to degenerate into figuring out the minimum deliverables for the feature set. Fortunately this is mitigated by the fact that we already know pretty much everything that should go into the product -- the hardware bit sheets will be different but the bits still do the same things. Therefore, when we exit this phase we still don't have anything close to what the Requirements Gurus would consider a remotely acceptable SOR document. So all the real pre-OOA work winds up getting done in the Product Definition phase. Here we produce a Functional Specification for the software portion of the product. The FSpec is a highly detailed, black box, user's view of the software. We accurately describe the software inputs (GUIs, APIs, etc.) and outputs (what the hardware does, reports, etc.) down to control labels in the GUI, error message that will be generated, and tester pin states. This is the document that Marketing, end users, and everyone else signs off. It effectively becomes the detailed SOR as well as the functional description of the software. And it is what we use as a basis for the OOA. FWIW, I tend to regard the DC as pre-OOA, though it is technically part of the OOA. To develop it you are really doing high level systems engineering rather than application design. I think this may be one reason why developers often do not pay enough attention to it; developers don't feel like they are designing until they are dealing with objects so they rush to get there. I think that work on the DC should begin as soon as one has a feature list -- having a clear vision of the domains on the DC is crucial to having correct abstractions within those domains. The DC is also the major tool for allocating the requirements throughout the application. With each passing year I see the DC as more important and worth more time, despite it being the simplest diagram of the bunch. Screwing up the DC may be the only way to get into rewrite-and-resubmit mode using S-M. > Maybe in a related question. Requirements are the desired effects to > be achieved by the software. Someone has to think up those effects. > Someone has to decide that those effects would be good to achieve. In > documenting requirements; what (if any) OOA models go into the > requirements document? After all, requirements are about the > phenomena of the application domain, not about the software being > built. So what, exactly, are the contents of the requirements > document? 
> > I think most OOAs work from fairly detailed requirements, and hardly > mention what goes on before hand. Admitedly it's not part of the > 'process methodology', but I would like to know your views. Leslie Munday has an interesting spin on this topic. He regards the OOA as the requirements documentation for the application. He is not involved in actually developing applications; he uses the OOA to formalize the requirements and then turns it over to developers as an SOR. [I am surprised he has not jumped in here; I thought he was lurking. If he doesn't, then you might want to contact him directly with your question at lmunday@gmswireless.com.] I don't completely subscribe to this view because still I tend to think of an OOA as a software design, albeit quite abstract. However, I certainly understand his viewpoint. The abstractions of the OOA are closer to mathematical structure than software structure. Thus it could be viewed as a sort of formal method for expressing requirements. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Dearing... > Interested to see your posting. The organisation I am working for are > currently in the process of defining requirements for a large scale > system development, the requirements of which are far from being well > understood. I am attempting to champion a process of Use-case analysis > in order to help define the end-users of the system and the things they > might want to use the system for. > Hopefully the next step from here will be to construct some high level > sequence diagrams. During this process it is inevitable that some > candidates for domains and / or objects will be identified. We may even > model things which are more abstract than domains or objects. A natural > (?) progression from this point is to identify all the candidate domains > and put together a domain chart and then you're off into SM-OOA > territory (or UML land if you're that way inclined). This is an interesting approach but I think some caution is needed. Use cases tend to allocate responsibilities in a manner that is very close to functional decomposition. This could lead to a hierarchy of domains that is based on successively more detailed views of the same processing. While this is consistent with the S-M view that domains can represent different levels of abstraction and it can be viewed as a means of identifying requirements allocations across bridges, it is not consistent with the view that domains should describe different subject matters. Also, when used for identifying objects, use cases tend to result in a few spider objects with lots of functionality and no significant data that communicate with a host of acolyte objects with relatively trivial functionality. I would worry that the same sort of thing could happen at the domain level as well. Admittedly, getting a handle on exactly what a 'subject matter' is isn't easy because it is not expounded well in the methodology. Other than relating to George Carlin's classic routine about the many different kinds of Stuff there are, I don't have a lot of words of wisdom. FWIW, the major criteria we use for identifying domains are: (1) clearly different entities. This is obvious stuff like diner car menus vs. train schedules. 
(2) clearly different abstractions. This is things like a train object in the scheduling domain vs. the GUI domain's train icon. (3) potential reuse boundaries. If the same functionality or abstractions might potentially be used in other applications we tend to encapsulate it in a domain. (4) different world view. We have domains that view the test system in terms of registers and fields, but have no concept of a digital test. Other domains understand the semantics of a test but think of the hardware as an abstraction to whom one sends events. (5) different responsibilities. This is a focus on what a domain does. The problem with these rules of thumb is that they work great in obvious cases but aren't real helpful for close calls. Nonetheless, I would argue that use cases really only address (3) and (5). Therefore, I would suggest that if you use use cases you should regard it as an initial pass and then follow up using the other criteria to see if domains need to be split or added. > I believe that this approach reflects current thinking on how to find a > compromise between UML and SM-OOA (but I may have got some of the > details wrong). In any case, it seems to us like a good enough way of > capturing high level requirements. Perhaps. My impression is that use cases are primarily employed by the various flavors of Responsibility based OO methodologies. There are others who do not even regard them as part of OOT. A SMOOA can be expressed as a subset of UML, so it is not clear to me that they are an *essential* bridging mechanism. [That is not to say they aren't useful -- I just don't like using them to identify objects in the IM.] > Word of caution though, I have heard it said that this technique only > reveals functional requirements and ignores other, non-functional > requirements such as required performance or architectures etc. The SMOOA does not address performance or architectural issues either. However, I think the functional bias is limiting for the reasons I gave above. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- > I'm not sure how useful this answer will be because what we do is > highly colorized, to coin a phrase, by our development environment. > The Corporation has adopted a development process that applies to > all products, hardware and software, that we must adhere to. That's where our problem lies! Our company has adopted a combination of two process models. Goldberg & Rubin in "Succeeding with Object" defined them as: Just-do-it Process Model This approach is sometimes referred to as the code-and-fix process model: write some code, use it, and fix the defects discovered as a result of use. Analysis and design are not cosidered to be one of the activities of this process model. Recursive/Parallel Model Analyze a little, design a little, implement a little, test a little. Founded on a quote from Heinlein: "When faced with a problem you do not understand, do any part of it you do understand, then look at it again." :^) Allen Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Would anyone be interested in participating in a series of S-M workshops? 
Each designed to illustrate each aspect of OOA from start to finish (maybe not translation), but with a goal in mind? We could use ESMUG as an OOA/RD virtual classroom. A series of workshops could be scheduled approximately once a month. Workshops would explain the concepts and techniques which S-M OOA/RD recommends to construct software. Each would act as a point of discussion and last one month. Using a fictitious customer wanting fictitious software and/or hardware, a series of workshops might include: ESMUG Workshop on Reguirements Engineering (although this is not strictly S-M) ESMUG Workshop Information Model ESMUG Workshop State Model ESMUG Workshop Process Model ESMUG Workshop on Recursive Design ...you get the picture... Kind Regards, Allen Theobald "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- Starting over .. (see comment further down) - -----Original Message----- - From: owner-shlaer-mellor-users@projtech.com - [mailto:owner-shlaer-mellor-users@projtech.com]On Behalf Of lahman - Sent: Thursday, February 25, 1999 8:37 AM - To: shlaer-mellor-users@projtech.com - Subject: Re: (SMU) Pre IM stage - - - lahman writes to shlaer-mellor-users: - -------------------------------------------------------------------- - - Responding to Theobald... - - > I think our organization needs a more formal pre-IM stage. Currently - > our pre-IM stage is very adhoc. I would be interested to know what - > Shlaer Mellor model builders out there doing to structure their - > pre-OOA process stage. - - I'm not sure how useful this answer will be because what we do is highly - colorized, to coin a phrase, by our development environment. The - Corporation has adopted a development process that applies to all - products, hardware and software, that we must adhere to. It is basically - a waterfall with nine phases: Statement of Need; Preliminary Requirements; - Product Definition; Design; and five others that aren't relevant. These - are sufficiently vague so that there is a lot of latitude. One can start - work on later phases prior to transitioning out of an earlier phase, but - from a project management view everyone must have completed a phase before - the project officially transitions to the next phase. - - The waterfall is not as bad as its reputation in our case because the - hardware lead times are so great (we use a lot of custom ASICs and gate - arrays). You simply can't significantly change the product hardware - requirements or you have a minimum of a year's delay. Since the - software's main purpose in life is to run the hardware, there is a certain - broad scale stability to the requirements that allows the waterfall to - work. Also, we are in the electronic test business, and the basics of - that business hasn't changed a lot in two decades -- every type of testing - done today was done a decade ago; all that has changed is the underlying - technical sophistication. This ensures a degree of familiarity with the - requirements as well as stability. Though the waterfall isn't disastrous - in our case, it still ain't a great fit. - - We do the DC and IM in the Product Definition phase while the rest of the - OOA is done in Design. The reason the DC and IM are done earlier is that - we have to commit to schedule to the nearest week at the end of that phase - (the main source of waterfall misfit). The only way we can get - sufficiently accurate estimates for this is if we know exactly how many - objects will be in the system. 
- [...] - - > Maybe in a related question. Requirements are the desired effects to - > be achieved by the software. Someone has to think up those effects. - > Someone has to decide that those effects would be good to achieve. In - > documenting requirements; what (if any) OOA models go into the - > requirements document? After all, requirements are about the - > phenomena of the application domain, not about the software being - > built. So what, exactly, are the contents of the requirements - > document? - > - > I think most OOAs work from fairly detailed requirements, and hardly - > mention what goes on before hand. 
Admitedly it's not part of the - > 'process methodology', but I would like to know your views. - [There I am skimming through this rather large e-mail and you do this. Now I'm going to have to go back to the top and read it again, properly.] - Leslie Munday has an interesting spin on this topic. He regards the OOA - as the requirements documentation for the application. He is not involved - in actually developing applications; he uses the OOA to formalize the - requirements and then turns it over to developers as an SOR. [I am - surprised he has not jumped in here; I thought he was lurking. If he - doesn't, then you might want to contact him directly with your question at - lmunday@gmswireless.com.] - I'm too busy trying to keep up with the mail on the other list :-) - I don't completely subscribe to this view because still I tend to think of - an OOA as a software design, albeit quite abstract. However, I certainly - understand his viewpoint. The abstractions of the OOA are closer to - mathematical structure than software structure. Thus it could be viewed - as a sort of formal method for expressing requirements. In all my years of practicing S/M like development processes, only once did the process actually get through to design and implementation. My basic viewpoint is that the only way one can tell if the requirements are complete, correct and consistent is to execute them. In order to do this one is compelled to introduce a little bit of design into the model. Now in order to produce and executable S/M model of the requirements one needs to take the model down to the ADFD level of detail. This is fine so long as the information in the DFD is validatable. I.e. ALL the data described by your DFD is externally visible from a black box POV. What I end up with, at the end of the analysis phase, is an IM, a number of STDs and an even greater number of DFDs (Notice I've dropped the 'A', because I do not always adhere to the S/M rules for drawing ADFDs). If one puts these components together and makes them executable to the customers satisfaction, one can say they have a complete set of requirements. Now on to the design phase. Take your three components the IM, STDs and the DFDs. Strip the bubbles out of the DFDs, keep the input and output flows but break all connections between bubbles. Now throw the IM, the STDs and the ADFDs away. The bubbles are your requirements, the rest of the model was there just to TEST your bubbles. If you really don't want to influence the design in your requirements spec, (i.e. remove all design information), put your bubbles into a blender for 1 - 2 minutes, dependent upon the number of requirements, and completely randomize them before entering into your SRS. In reality one doesn't through the model away. Let's re-use the model as a basis for the design. That is the difference between what I described and the S/M process. By putting the model into the SRS you are adding design information, but since it's in the SRS and there are no requirements in the SRS to say 'one must do the design as shown' the developers are free to ignore the model and just look at the requirements if they wish. Like I say, only once did I get an SRS published which contained a complete S/M model with requirements and that model was subsequently used as the basis for the design. Then I left the company. Does this help? Leslie. 
Ross Russell writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen Theobald wrote: > Allen Theobald writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Would anyone be interested in participating in a series of S-M > workshops? Each designed to illustrate each aspect of OOA from start > to finish (maybe not translation), but with a goal in mind? > > We could use ESMUG as an OOA/RD virtual classroom. A series of > workshops could be scheduled approximately once a month. Workshops > would explain the concepts and techniques which S-M OOA/RD recommends to > > construct software. Each would act as a point of discussion and last > one month. > > Using a fictitious customer wanting fictitious software and/or > hardware, a series of workshops might include: > > ESMUG Workshop on Reguirements Engineering (although this is not > strictly S-M) > > ESMUG Workshop Information Model > > ESMUG Workshop State Model > > ESMUG Workshop Process Model > > ESMUG Workshop on Recursive Design > > ...you get the picture... > > Kind Regards, > > Allen Theobald Yes, it sounds like a great idea. My application is tape drive development that is real time constrained so I am interested in modeling techniques that lead to the most efficient code possible. Regards, Ross peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 02:02 PM 2/25/99 -0500, shlaer-mellor-users@projtech.com wrote: >Allen Theobald writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Would anyone be interested in participating in a series of S-M >workshops? Each designed to illustrate each aspect of OOA from start >to finish (maybe not translation), but with a goal in mind? Hi Allen - I like the idea, but I'm not quite sure who would do what. Who do you propose will organize the workshop and provide the process outline? Do you see a collaborative community effort, a reliance on a vocal expert, or perhaps some other alternative? Interesting... _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| John Hendrix writes to shlaer-mellor-users: -------------------------------------------------------------------- please count me in. regards johnh Allen Theobald wrote: > Allen Theobald writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Would anyone be interested in participating in a series of S-M > workshops? Each designed to illustrate each aspect of OOA from start > to finish (maybe not translation), but with a goal in mind? > > We could use ESMUG as an OOA/RD virtual classroom. A series of > workshops could be scheduled approximately once a month. Workshops > would explain the concepts and techniques which S-M OOA/RD recommends to > > construct software. Each would act as a point of discussion and last > one month. 
> > Using a fictitious customer wanting fictitious software and/or > hardware, a series of workshops might include: > > ESMUG Workshop on Reguirements Engineering (although this is not > strictly S-M) > > ESMUG Workshop Information Model > > ESMUG Workshop State Model > > ESMUG Workshop Process Model > > ESMUG Workshop on Recursive Design > > ...you get the picture... > > Kind Regards, > > Allen Theobald 'archive.9903' -- smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > Interesting. My outgoing message lines are wrapped at 60 > characters. They are now! [...] > > The Application task contains code for the OOA bridge. The bridge > > does NOT correspond to the "space" between the two tasks. > These statements make me even more nervous than the first > one! If this is your belief, then I don't think you are > doing Shlaer-Mellor. I've certainly put my own spin on the systems and techniques I've developed but I hope I have remained true to The Vision. > Render unto the bridge the things > architectural and render unto the Application the things in > the problem space. I come to bury Caesar, not to praise him. :-) > > > In the OOA the communications are across the bridge. As described, the > > > Application requests a read from the Mouse Driver bridge and the Mouse Driver > > > responds (whether synchronously or asynchronously was not clear). My issue is > > > simply that in the OOA there *must* be a wormhole in the Application domain for > > > that request, regardless of how you handle it in the implementation. > > > > I'm finding it very difficult to see where you're coming from with > > this. The idea that an object in a domain must request a read (via > > a wormhole) from a bridge which then responds with an event is new > > to me. Could you explain it? > Aha, this may be the core of the disconnect. I suggest you > read the Bridges & Wormholes paper on the PT web site. I read this paper when it first came out, but only got to somewhere on page 5. I remember thinking at the time that it was all bit implementational. Since I had already figured out my own Theory of Bridges it didn't matter to me. But, at your suggestion, I have now read it all the way through (and then read it again to answer your questions!). I have to say that I was quite disturbed by some of the things in it. But that's another can of worms. > All communications between domains *must* be through a > bridge. The manifestation of a bridge within a domain OOA > is a wormhole in the ADFD or an incoming external event. I agree. The mouse click event is an incoming external event. The Bridges and Wormholes paper talks about external events but what it really means is inter-domain events. Unfortunately, the paper does not appear to explicitly address how true external events are handled. I assume they just bubble up from the bridge and are received by the domain. No request wormhole is involved. > Given this, the OOA for > the communication you described must take one of two > forms... > Synchronous (which I assumed since it was a simple data read): > request data read -> ADFD data flow to wormhole > response data from wormhole -> ADFD data flow from wormhole to test process in action No, it's definitely asynchronous. The read was from the message queue - a specific bridge implementation artifact in the architecture. > Asynchronous: > generate request event -> ADFD event flow to wormhole. 
> > wormhole places response event w/ data packet on domain's event queue. > > response event initiates transition -> ADFD data flow to test process in action It's not this one either. There's no request wormhole. It's an incoming external event. > Either way the response data message comes back *into* the > domain explicitly. Under the ADFD rules it then has to be > tested by an ADFD test process to determine whether the > mouse action warrants the generation of an internal event. > If so, the flow from the test process goes to an event > generator for the internal event. I can see where you're coming from now, but I still think you're wrong. Consider how an unsolicited external event enters a model, without a domain explicitly asking for it (it's unsolicited after all!). Bear in mind that an OOA domain can't be active until it gets an inter-domain event. > Having said this, it occurs to me there may be another > source of miscommunication here. As I read the original > description the client domain requests mouse data from the > Mouse Driver and then examines the response message data to > determine whether something happened that was interesting > enough to warrant generating an internal event. If this is > not the case and You're right, this is not the case. :-) > the Mouse Driver only sends a response > event when something interesting happens, then we are in an > entirely different situation. Then we have... Very close. > Asynchronous (only possibility): > generate request event -> ADFD event flow to wormhole. > > wormhole places response event w/ data packet on domain's event queue. > > response event initiates transition (same one as internal event previously) I thought you had it then, but you're still using a request wormhole for an unsolicited external event. > so the test and the internal event generation are > unnecessary. This is more consistent with what you were > describing for the OOA -- but this is not a simple data read > -- a state machine is active in the Mouse Driver. The Mouse Driver domain is a realized implementation domain and it maps to the Mouse driver task which has its own thread of control. [...] -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > Would anyone be interested in participating in a series of S-M > workshops? Each designed to illustrate each aspect of OOA from start > to finish (maybe not translation), but with a goal in mind? You can color me as interested. However, I think Fontana has raised a number of issues about the logistics that need answering. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen, I would be interested in participating, and I guess one or two of my colleagues would also be interested. I admire your enthusiasm! Are you offering to co-ordinate this activity? I suspect there would be lots of work involved in defining the workshops and co-ordinating activities - possibly more than would be fair for one person. Perhaps you could start up a thread to discuss the nature of this exercise and define (and maybe partition off) some of the work involved. 
Good luck Dan :-) Plextek Limited Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- I guess this question relates to translation of one-to-one mappings. BTW, does anybody translate by hand? Anyway, assume I have objects A and B, such that A<-->B (1:1). One might code it similar to:

    #include <iostream>
    using namespace std;

    class A;

    class B {
    public:
        B(int val) : id_(val), assoc_(0) { }
        void attach( A* v );
        int  id(void) { return id_; }
        int  assoc(void);

        int id_;
        A*  assoc_;
    };

    class A {
    public:
        A(int val, B* m) : id_(val), assoc_(m) { if (assoc_) assoc_->attach(this); }
        ~A() { if ( assoc_ ) delete assoc_; }
        int id(void)    { return id_; }
        int assoc(void) { return assoc_->id(); }

        int id_;
        B*  assoc_;
    };

    void B::attach( A* v ) { assoc_ = v; }
    int  B::assoc(void)    { return assoc_->id(); }

    int main()
    {
        A* a = new A(0, new B(1));
        cout << "A id = " << a->id() << "\nA assoc = " << a->assoc() << endl;
        delete a;
        return 0;
    }

The above allows you to navigate from A to B. But B also "knows" how to get back to A. Now say I add a relationship from B to C such that A<-->B<-->C (1:1:1). How do I modify B so that it retains its link to A but also adds a link to C? I know this is probably simple. And it probably involves polymorphism and/or templates. Thanks for your input. Kind Regards, Allen Theobald P.S. I haven't forgotten about the workshop idea. I'm just thinking long and hard about how it might be done. "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- [yes, I do still hand code, at times. Generally, though, I use an approach of partial automation] I can't answer your question directly - you don't provide enough information about your architecture. You seem to have included a cascaded deletion feature, which is not part of the base method. Also, it is not obvious where your referential attributes are. So, instead of modifying your code, I'll start afresh with my own implementation of relationships. I'll stick with 1:1 for now, though it should be obvious how to extend it to :M relationships (hint: list<A*> get_linked_R1_A() {return A::find_by_ref_b(get_id());}.) This is a very primitive translation. There is minimal architecture between the OOA model and the generated code. Also, I am rather old-fashioned, and abhor the link/unlink operators, and the idea that referential attributes are pointers. So I don't use them. One last comment: I assume the model is correct. So there's no defensive code. Assume:

    A(*id:int, ref_b:int [R1])
    B(*id:int, ref_c:int [R2])
    C(*id:int)
    R1 = A:B = 1:1
    R2 = B:C = 1:1

    // realisation of B
    class B {
    public:
        // create/delete
        B(int id, int ref_c) : _id(id), _ref_c(ref_c) { map_B[id] = this; }
        ~B() { map_B[_id] = 0; }

        // accessors
        int  get_id()    { return _id; }
        int  get_ref_c() { return _ref_c; }
        void set_ref_c(int new_value) { _ref_c = new_value; }

        // class-scope accessors
        static B *find_by_id (int key) { return map_B[key]; }
        static B *find_by_ref_c (int key) { /* iterate over map_B to find the B
                                               with the required value of ref_c */ }

        // navigation
        A *get_linked_R1_A() { return A::find_by_ref_b(get_id()); }
        C *get_linked_R2_C() { return C::find_by_id(get_ref_c()); }

    private:
        // internal representation of attributes
        int _id;
        int _ref_c;

        // map of all instances
        static map<int, B*> map_B;
    };

It is obviously very easy to generalize this into a code generation template. But it is equally obvious that it could be pretty inefficient to "find_by_ref_*". Fortunately, it's not too hard to improve it by adding a cache. 
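A minimal sketch of what such a cache might look like, under the same assumptions as the fragment above (the member names and the invalidation hook are illustrative, not from the original post):

    // cached navigation from B to its related C (sketch only; assumes the
    // B and C classes from the fragment above)
    class B_with_cache {
    public:
        C *get_linked_R2_C() {
            if (!_c_cache_valid) {              // first navigation: do the real lookup
                _c_cache = C::find_by_id(_ref_c);
                _c_cache_valid = true;
            }
            return _c_cache;                    // later navigations avoid the search
        }
        void set_ref_c(int new_value) {
            _ref_c = new_value;
            _c_cache_valid = false;             // referential attribute changed: drop the cache
        }
        void invalidate_C_cache() { _c_cache_valid = false; }  // called when the linked C is deleted
    private:
        int  _ref_c;
        C   *_c_cache;
        bool _c_cache_valid;
    };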
This makes the navigation operators fast after the first navigation. The performance of the cache will depend on how the system is used. You may need to cache all the links; and you need to keep the cache consistent when other objects are deleted. (i.e. destructors must invalidate the caches in linked objects; and setting referential attributes must invalidate the cached links on the old value). If navigation is rare, then this overhead may cancel any performance advantage. It may be that we know the linked object at the time we create this object. If we have a cache, then we can pre-initialise the cache (normally, it would start in the cache_invalid state). Dave. "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- In my previous post, please read > // map of all instances > static map > map_B; as: // map of all instances static map<int, B*> map_B; Dave. p.s. the migration from a map-based architecture to a link-based one would probably make a fairly good workflow for one of your workshops. It's an interesting, yet simple, sequence of architectural enhancements. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- > // map of all instances > static map<int, B*> map_B; Odd. Does anyone use Visual C++ 6.0 out there? Static map member variables give the following error under VC++6.0: unresolved external symbol "private: static class std::map... Just curious. 
Kind Regards, Allen Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- I am having success with using maps as static member variables in VC 6.0. Here are two things to check: 1. Make sure to define all of the static members in the .cpp file for the class. In the .h file:

    #include <map>
    #include <string>

    // the key and value types here are assumed for illustration; the
    // original template arguments were lost in the archive
    typedef std::map<std::string, int> SymbolTable;

    class X {
    private:
        // declaration of static member variable using an std::map
        static SymbolTable m_enumTypes;
    };

In the .cpp file:

    // define the static member variable - call the default constructor
    SymbolTable X::m_enumTypes;

2. If you are using the incremental linker, try doing a Rebuild All. The incremental linker gets confused sometimes when you add new member variables. ________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com 888-OOA-PATH effective solutions for software engineering challenges Carolyn Duby voice: +01 508-673-5790 carolynd@pathfindersol.com fax: +01 508-384-7906 ________________________________________________ "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Allen Theobald writes to shlaer-mellor-users: > > > // map of all instances > > static map<int, B*> map_B; > > Odd. Does anyone use Visual C++ 6.0 out there? Static map > member variables give the following error under VC++6.0: > > unresolved external symbol > "private: static class std::map... Don't forget to define the variable in the .cpp file. On my GNU compiler, I must omit the "static" keyword from this definition. (I don't know if this is standard-compliant; but it does make sense, sort of.) Alternatively, omit the declaration from the class and simply declare/define the variable as a file-scope variable in the .cpp file. Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- > You seem to have included a cascaded deletion feature, which is not > part of the base method... I may have! :^) The reason you see the code the way it is, is because of this CREATOR pattern: Assign class B the responsibility to create an instance of class C if one of the following is true:

    o B aggregates C objects
    o B contains C objects
    o B records instances of C objects
    o B closely uses C objects
    o B has the initializing data that will be passed to C when it is created

So, who should be responsible for creating new instances of C objects? Deleting instances of C objects? Kind Regards, Allen Theobald Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Thanks to all who pointed me in the right direction! I missed the static initializer (duh!). I just automatically assumed that Microsoft's implementation of STL was flawed (it is -- but not in this case). 
-Allen "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Allen Theobald writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > You seem to have included a cascaded deletion feature, which is not > > part of the base method... > > I may have! :^) The reason you see the code > the way it is, is because of this CREATOR pattern: > > Assign class B the responsibility to create an instance of class C > if one of the following is true: > > o B aggregates C objects > o B contains C objects > o B records instances of C objects > o B closely uses C objects > o B has the initializing data that will be passed to C when it is > created > > So, who should be responsible for creating new instances of C objects? > Deleting instances of C objects? I'm not sure if this pattern is quite appropriate for SM translation. The application domain model will contain ADFDs; which may include create accessors for C. Remember that an SM model can be transitarily inconsistant. Looking at your criteria, neither of the first two are true. SM uses association, not aggregation not composition. Similarly, B does not record C instances: it merely references them. The 4th critera is a bit wooley. The last is not true: By definiton, SM models are normalised, so B has no information of C. The create accessor processes for C may be realised as calls to a factory object (In the code snippet I gave, I didn't use factories, but simply used class-scope). The is no good reason to use one application object as a factory for another. Its probably best to define architectural classes. Your pattern may be appropriate where B is in the architecture and C is in the application. A variation is to have your translator produce 2 classes for each SM object: one for the factory one one for instances. It is possible that you may find a situation where this sort of pattern will be helpful within the application. In this case, you need to define it in your architecture, and define rules for mapping SM models onto that architecture. That could get quite, um, interesting. I've never attempted it. Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- > Its probably best > to define architectural classes. Your pattern may be appropriate > where B is in the architecture and C is in the application. A > variation is to have your translator produce 2 classes for each > SM object: one for the factory one one for instances. I think i'm following this! Care to elaborate [oh no! the e-word :^)] -Allen "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Allen Theobald wrote: > > Its probably best > > to define architectural classes. Your pattern may be appropriate > > where B is in the architecture and C is in the application. A > > variation is to have your translator produce 2 classes for each > > SM object: one for the factory one one for instances. > > I think i'm following this! Care to elaborate [oh no! the e-word :^)] "Do the simplest thing that could possibly work." When working with translation, always keep this rule in mind. 
If you need something more complex later, then you make the change at the meta level (i.e. change the translator), so the cost is not too high. For that reason, my previous example used a very simple approach for managing the existence of objects: the constructors stored "this" in a class-scoped map; and destructors removed it. When you need something a bit more sophisticated, an object-factory provides a place to implement it. We can define a class:

    class factory_B {
    public:
        static factory_B *singleton() {
            static factory_B *the_instance = new factory_B;
            return the_instance;
        }

        B *create_B (int id) {
            B *the_instance = new B(id);
            _map_B[id] = the_instance;
            return the_instance;
        }

        void delete_B (B *the_instance) {
            _map_B[the_instance->get_id()] = 0;
            delete the_instance;
        }

    private:
        factory_B() {}              // singleton: hide the constructor
        map<int, B*> _map_B;        // map of all instances
    };

This can easily be made into a code-generator template, and extended for extra constructors as required. (The generator for B objects is slightly simpler because we remove the static map). Note that I use the singleton to ensure only one instance of the factory can exist in the system. Once you have this you can start adding features. For example, you could check that you don't create the same id twice. Or you could pre-create all the instances (for finite identifiers) and keep an "exists" flag for each one. This avoids run-time memory management. There are any number of possible extensions. Remember, do the simplest thing that could possibly work. But we can add the caveat that "work" means "meet non-functional requirements". Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- Sorry to follow up my own post (again). A couple of points I missed: 1. You'll need to separate the method declarations and definitions into .h and .cpp (or .cc) files. This is to avoid circular dependencies when: 2. You need to implement some find accessors in the factory object - it's the only one that knows all the instances. Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello Everyone, Does anyone have a truly convincing example of a many-to-something-to-something associative object? By "convincing" I mean that you cannot model the *same concepts* using a one-to-something-to-something associative and a one-to-many to another object that can be identified by the identifier of the associative and an additional identifying attribute. Thanks. -- steve "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Stephen J. Mellor" wrote: > Does anyone have a truly convincing example of a > many-to-something-to-something associative object? > > By "convincing" I mean that you cannot model the > *same concepts* using a one-to-something-to-something > associative and a one-to-many to another object that > can be identified by the identifier of the associative > and an additional identifying attribute. 
Perhaps you could come up with a "truly convincing" example of a situation that requires a 1:M:M relationship, i.e. one where the same concepts cannot be modelled with two 1:M relationships into the "associative" object. If your criterion for "truly convincing" is that the concepts *cannot* be modelled using an expanded form then it's probably possible to formally prove that no truly convincing examples exist. Probably what you meant to ask for is a situation where it is difficult to make the case for not using the M:(x:y) relationship. The most fruitful source of such examples seems to be situations where you want to record the history of a relationship over time. Imagine a scenario with suspected drug dealers and known drug users. We might record meetings between them in an object: CONTACT(*suspect_id, *user_id, *time) to formalise a relationship between the suspect and user. This would be a M:(M:M) relationship. But it can be modelled in other ways, too. I don't know why UML doesn't include the concept. Perhaps, with their less rigorous approach to modeling, it is too easy to abuse the construct. Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- I do not believe such an example really exists. There are ternary relations that can not be represented by three binary projections, but these can be represented by a pair of binary relations. Any subset of the 3-D product of 3 domains XxYxZ can be accessed as a list of items in the z coordinate, for every (x,y) pair. Suppose these items are multiple-valued or of arbitrary type. Then they correspond to a child type of a 1:Many relation with parent an entry in the 3D matrix (i.e., these multiple items are the 'many' component of Steve's something-to-something-to-one-to-many decomposition, which I believe always exists). One interesting example is the ternary relation among a single source and destination state and the EventTypes that ENABLE this transition between states. More than one event type can enable each transition. To define this relation, a list of (AND or OR-connected) event types can be attached as a label to the Transition Edge which defines a binary relation on StatesXStates. Each event label is a reference to a unique EventType (declared elsewhere as a complex object with arguments). So two binary TRANSITION and ENABLE relations decompose the ternary relation StateXStateXEventType. Bob Lechner UMass-Lowell lechner@cs.uml.edu Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Bob Lechner wrote... [deletia] In my Chris Rock in the movie "Rush Hour" voice, "what the hell did you just say?" 
:^) Kind Regards, Allen Ed Wegner writes to shlaer-mellor-users: -------------------------------------------------------------------- If a convincing example requires a proof that one cannot always CREATE an equivalent one-to-x-y and another many-to-one, then I agree with Whipp and suspect there isn't such an example. My "convincing" argument against any modelled object construct is to ask the problem domain expert whether the object in question is a meaningful abstraction in the problem space. Taking Whipp's example of the drug dealer, the user, and the time of their contact, we can model this without the M-(M:M) construct:

    user <<-------------->> dealer
    *user name     |        *dealer name
                   |
                   v
    user/dealer-pair <------>> contact
    *user name                *user name
    *dealer name              *dealer name
                              *time

It is quite easy for me to imagine a system whose requirements have absolutely no need for keeping track of any instance of the abstraction "user/dealer-pair" unless there also exists at least one instance of the abstraction "contact" between that pair; i.e. the "user/dealer-pair" abstraction carries with it no attributes other than the identifying ones; and it is meaningless in the problem space without at least one instance of "contact". So, its identifying attributes are just redundant information carried as an overhead due to limiting the modelling constructs. It may be mathematically equivalent, but in this case it achieves that equivalence by requiring more "stuff" while adding no value. As an aside, it looks to me that removing the M-(M:M) construct could easily lead to a less-efficient than necessary implementation for this case (or would require a more intelligent set of translation rules) if efficiency of either memory or real-time are constraints. And as Whipp also pointed out, the M-(M:M) construct is commonly seen in systems where it is required to keep track of the history of a relationship over time. This, of course, is often a pattern at the heart of managing contention for scarce resources in real-time systems: in particular, those that are real-time and service critical. My general observation is that this construct doesn't happen often, but when it does extreme care in optimising real-time performance is often called for. So, I'd like to see it retained. Regards, Ed Wegner Software Technology Leader Tait Electronics Ltd. "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- At 09:53 AM 3/25/99 +1200, Ed Wegner wrote: >Ed Wegner writes to shlaer-mellor-users: >-------------------------------------------------------------------- >As an aside, it looks to me that removing >the M-(M:M) construct could easily lead to a >less-efficient than necessary implementation >for this case (or would require a more >intelligent set of translation rules) if >efficiency of either memory or real-time are >constraints. >[...] >So, I'd like to see it retained. As pointed out at the top, this is an aside. However, a rather interesting aside. How do you all feel about removing the construct? 
If we did, would it really make the translation rules more difficult? -- steve peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 11:58 AM 3/25/99 -0800, shlaer-mellor-users@projtech.com wrote: >"Stephen J. Mellor" writes to shlaer-mellor-users: >-------------------------------------------------------------------- >How do you all feel about removing the construct? >If we did, would it really make the translation >rules more difficult? I believe that the added complexity it introduces does not balance its benefit. I believe it should be removed. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Jay Case writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Stephen J. Mellor" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > [trim] > How do you all feel about removing the construct? I cast my vote (somewhat obviously :) for retiring the M-(x:y) construct. > If we did, would it really make the translation > rules more difficult? If anything, a translation engine's life will become simpler. It is, IMHO, far easier to optimize the 1-M pair which would replace the 'many associator' than the semantics-driven permutations which can arise with M-(x:y), and its evil reflexive twin. From an analysis perspective, I've never seen, nor been able to fake up, a many-something-or_other that constituted anything more than waving a dead chicken at an incomplete OIM. > > > -- steve Just my two frosties worth... Regards - Jay Case "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > peterf@pathfindersol.com (Peter J. Fontana) wrote: > >How do you all feel about removing the construct? > >If we did, would it really make the translation > >rules more difficult? > > I believe that the added complexity it introduces does not > balance its benefit. I believe it should be removed. This complexity depends on your architecture. The type of architecture I described here a few days ago has no extra complexity to handle M-(M:M) relationships (the only change would be to have navigations return sets instead of lists). If a base architecture supports a construct, then you only need to add complex optimisations if the project requires them. If optimisations are needed then the complexity would be similar however you choose to model the problem, assuming you maintain an unbiased model. If anything, as Steve suggested, translation could be more complex without it. I am against removing it. Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error Ed Wegner writes to shlaer-mellor-users: -------------------------------------------------------------------- >peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: >"Stephen J. Mellor" writes to shlaer-mellor-users: SM>How do you all feel about removing the construct? SM>If we did, would it really make the translation SM>rules more difficult? 
PF>I believe that the added complexity it introduces does not balance PF>its benefit. I believe it should be removed. Just what complexity being introduced are you talking about? Complexity of describing the problem? Or complexity of deriving a solution? If it's the problem, I still claim that the following:

    user <<-------------->> dealer
    *user name     |        *dealer name
                   |
                   v
    user/dealer-pair <------>> contact
    *user name                *user name
    *dealer name              *dealer name
                              *time

is MORE complex than:

    user <<-------------->> dealer
    *user name     |        *dealer name
                   |
                   v
    user/dealer-pair
    *user name
    *dealer name
    *time

It also seems obvious to me that coming up with an efficient set of translation rules for the latter is also LESS complex. However, if the sole purpose driving this change is simplicity of modelling, then of course the former is less complex. An alphabet with 25 letters IS less complex than one with 26. However, in this case I still claim that it will make communicating the real requirements - and then translating them - MORE complex. Regards, Ed Wegner Software Technology Leader Tait Electronics Ltd. Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- > Ed Wegner writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Just what complexity being introduced are you talking about? > Complexity of describing the problem? Or complexity of deriving a > solution? > [...] > Regards, > > Ed Wegner > Software Technology Leader > Tait Electronics Ltd. I agree with both writers. The first model has greater normalization, and the second one is simpler to communicate to the user. But a third criterion is which one is more amenable to changes - which are always part of the 'real' requirements? One criterion of a good approach [to translation?] is support for migration of any subset of attribute value domains between local value representations (time*) and references by means of foreign keys. This migration could be dynamic via alternate runtime views, or static by database conversion. For example, what if the dealer (or a salesman at the dealership) routinely wants to group [some of] his user-contacts first by time then by customer (e.g. for appointment times in the immediate future)? In this case, a ternary relation is an easier starting point - time becomes a third base domain of a '3D-cube' or ternary relation. This 'canonical' model is symmetric among time, dealer and customer - any factoring into one or two binary relations can be derived from it. 
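A small sketch of that last point, reusing Whipp's CONTACT example (the types and containers here are illustrative assumptions, not taken from any post): store the ternary relation once, and derive a binary factoring from it when a particular view is needed.

    #include <set>
    #include <string>
    #include <utility>

    // The M-(M:M) associative object: one instance per (suspect, user, time) triple.
    struct Contact {
        std::string suspect_id;
        std::string user_id;
        long        time;
        bool operator<(const Contact& o) const {
            if (suspect_id != o.suspect_id) return suspect_id < o.suspect_id;
            if (user_id    != o.user_id)    return user_id    < o.user_id;
            return time < o.time;
        }
    };

    // The canonical ternary relation: the full contact history.
    typedef std::set<Contact> ContactHistory;

    // One derived binary factoring: the suspect/user pairs that have had
    // at least one contact (Ed Wegner's "user/dealer-pair" abstraction).
    std::set< std::pair<std::string, std::string> >
    derive_pairs(const ContactHistory& history)
    {
        std::set< std::pair<std::string, std::string> > pairs;
        for (ContactHistory::const_iterator it = history.begin();
             it != history.end(); ++it) {
            pairs.insert(std::make_pair(it->suspect_id, it->user_id));
        }
        return pairs;
    }

Grouping by time first, as in the appointment example above, would just be another derivation over the same set, keyed differently.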
In my opinion, the real value of N-ary associations is their ability to support such alternate factorings (or views), with arguably less implementation AND specification complexity. At this interface between specifiers and implementors, we can get rid of all M:N attributed relations by transforming them into an intermediate 'middle-ware' canonical view model. The latter includes only primitive or unattributed 1:M relations plus N-ary Associative Entities which absorb N-ary relation attributes. This canonical view decouples one set of alternate user-centered query views or use cases for specifying logical requirements, from another set of physical implementation or representation views among which adaptive optimization can take place. Its simplified graphic notation may even be simpler to communicate with some users. [Caveat: I am not a commercial S-M User, but have been teaching and implementing their OLC method ever since the 1989 SENotes article which preceded their texts - and yes, I am old enough to have used the network model before there were any relational or OO DB's :-)] Bob Lechner UMass-Lowell lechner@cs.uml.edu peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 01:43 PM 3/26/99 +1200, shlaer-mellor-users@projtech.com wrote: >Ed Wegner writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >SM>How do you all feel about removing the construct? >SM>If we did, would it really make the translation >SM>rules more difficult? > >PF>I believe that the added complexity it introduces does not balance >PF>its benefit. I believe it should be removed. > >Just what complexity being introduced are you talking about? >Complexity of describing the problem? Or complexity of deriving a >solution? Thank you Bob Lechner for a concise and complete summary of the meta-level assessment of "implementability". At a more tangible level, the overall project gains when trading modeling complexity for translation complexity. Typically there are many more analysts than designers (who create the code templates) - integer multiples. In general analysts tend to have an "average" level of technical competency, and the designers tend to be somewhat more capable. So when making relatively balanced trades of modeling complexity for translation complexity, the overall difficulty of deploying with the method decreases. In this particular case I don't see much of an increase in the design - as Bob points out you already have the general primitives in place to solve the problem. From Pathfinder experience with our clients we believe that the OOA/RD method needs to become less complex to learn and deploy successfully. This approach must be reasonably easy to understand for the "average" analyst/developer, and still retain the effectiveness of OOA/RD today. As we move OOA/RD forward to the adoption of the UML notation, we see a few areas where we can simplify certain aspects of the approach and give up very few benefits. The m:m:m is certainly one of them. We believe this simplicity is readily achievable, and the resulting loss of rigor and/or expressiveness is minimal. 
If we achieve this simplicity, and are therefore better able to leverage the maturity and success of this approach into the UML world, then this email forum could become a much busier and more valuable avenue.  If we cling to unneeded complexity for the sake of continuity and nostalgia, then this group will quickly dwindle to a social gathering of technological dinosaurs.  Who will we help if we can't get anyone else to follow our lofty and eminently defensible technical positions?

Ed - I hope you don't think I'm teeing up on you specifically.  You have offered a valid position on the conciseness and readability of the m:m:m, just as Steve Mellor asked.

_______________________________________________________
 Pathfinder Solutions Inc.       www.pathfindersol.com |
                                          888-OOA-PATH |
                                                        |
effective solutions for software engineering challenges|
                                                        |
 Peter Fontana              voice: +01 508-384-1392     |
 peterf@pathfindersol.com   fax: +01 508-384-7906       |
_______________________________________________________|

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

I haven't read the monthly administrative posting in a while.  I was wondering if asking about kickstarting an object blitz would be appropriate on this list.  If it is, I will post some overview information and ask for guidance.  If not, well then I won't. :^)

Kind Regards,

Allen Theobald

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana:

>As we move OOA/RD forward to the adoption of the UML notation, we see a few
>areas where we can simplify certain aspects of the approach and give up very
>few benefits.  The m:m:m is certainly one of them.  We believe this
>simplicity is readily achievable, and the resulting loss of rigor and/or
>expressiveness is minimal.

I don't understand why you say "loss of rigor".  Can you be more specific?

Thanks,
-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca
LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 11:17 AM 3/26/99 -0600, shlaer-mellor-users@projtech.com wrote:
>"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>>... The m:m:m is certainly one of them.  We believe this
>>simplicity is readily achievable, and the resulting loss of rigor and/or
>>expressiveness is minimal.

>I don't understand why you say "loss of rigor".
>Can you be more specific?

In this particular case you may be able to defend that the m:m:1->m is just as rigorous as m:m:m - the comment was just to defer any subjective reaction.

There are of course more things that can be done to "streamline" OOA/RD, each with varying balances of simplicity/rigor.

_______________________________________________________
 Pathfinder Solutions Inc.       www.pathfindersol.com |
                                          888-OOA-PATH |
                                                        |
effective solutions for software engineering challenges|
                                                        |
 Peter Fontana              voice: +01 508-384-1392     |
 peterf@pathfindersol.com   fax: +01 508-384-7906       |
_______________________________________________________|

Don White writes to shlaer-mellor-users:
--------------------------------------------------------------------

To all:

I have been a long time lurker because I am not able to use SM on any current projects, but my belief and interest in SM is still strong.  It has been quite some time since I took SM training.  I'm sure this is going to be a very obvious question...

But why is there a distinction between 1. m:m:m, and 2. m:m followed by m:m?  m:m:m seems to me to be contrary to the spirit of SM since the information model did not seem to relate more than two object types at once.

Don

PS. Hi H.S.

At 02:12 PM 3/26/99 -0500, you wrote:
>peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
> ......

"Whipp David (SMI)" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Don White wrote:
> But why is there a distinction between 1. m:m:m, and 2. m:m
> followed by m:m?
> m:m:m seems to me to be contrary to the spirit of SM since
> the information
> model did not seem to relate more than two object types at once.

m:m:m is an unclear way to express the construct.  It is better written as M-(M:M).

An associative object is one that describes a relationship.  For example, you might have an object "Fisherman" (sorry, "fisherperson") and "lake".  The name of the lake is not an attribute of the person; and the name of the fisherman is not an attribute of the lake.  Instead, you can have a third object, "Fishing Permit", that describes the relationship.  Its identifier is formed by combining the lake name and the fisherman's name, i.e.

   PERMIT(*lake_id (R1), *person_id (R1), issue_date, expiry_date)

It is important to note that the relationship is formalised by the identifiers of the associative object.  Also note that the object can have attributes.

This is a unary associative object.  Given any pair of lake and person, there will be, at most, one instance of the permit.  The question of this thread is: is it reasonable to have a model where there can be more than one instance of PERMIT for each pair of related fisherman and lake.
To put it another way, can the object have an additional identifying attribute (e.g. can the issue date be a third identifier). Dave. -- Dave Whipp, Senior Verification Engineer Siemens Microelectronics Inc. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com Opinions are my own, factual statements may be in error "Todd Cooper" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Instead, you can have a third object, "Fishing Permit" that > describes the relatinship. Its itentifier is formed by > combining the lake name and the fisherman's name. i.e. > > PERMIT(*lake_id (R1), *person_id (R1), issue_date, expiry_date) > > It is important to note that the relationship is formalised by > the identifiers of the asociative object. Also note that the > object can have attributes. > > This is a unary associate object. Given any pair of lake and > person, The will be, at most, one instance of the permit. The > question of this thread is: is it reasonable to have a model > where there can be more than one instance of PERMIT for each > pair of related fisherman and lake. > > To put it another way, can the object have an additional > identifying attribute (e.g. can the issue date be a third > identifier). Perhaps a clearer example would be CaughtFish as the associative object: CaughtFish( *lake_id (R1), *person_id (R1), *timeFishCaught, length, weight, ... ) I have found the M-(M:M) formalism very useful and would vote to keep it. -Todd Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen, I don't see how it would do any harm to INVITE people to contribute and then mail anyone who replies with your background info. I would be happy to review your background info. (can't guarantee any intelligent output though!) Dan :o) >>> Allen Theobald 26/03/99 16:47:36 >>> Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- I haven't read the monthly administrative posting in a while. I was wondering if asking about kickstarting an object blitz would be appropriate to ask on this list. If it is, I will post some overview infomation and ask for quidance. If not, well then I won't. :^) Kind Regards, Allen Theobald Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen Theobald wrote: > I was wondering if asking about kickstarting an object blitz... Thanks Steve and Dan. It doesn't seem too out-of-order to post background info and invite comment; so here it is... :^) The odd part about this project is that there are no formal requirements to speak of. It exists as a concept from a proposal which I can freely develop any way I want to go. The "amorphousness of an open-ended problem" can be a problem itself, though. Without a well-defined problem to start from, one always faces a certain amount of vertigo. Kind Regards, Allen ----------8<---------- cut here ----------8<---------- There are two users: an analyst who uses the tool, and an end-user who uses the executable (or script) the analyst provides to compress / decompress the files. The Strand Authoring Tool (SAT) provides a GUI interface for the compression engine developer (Analyst) to implement the Strand Compression Algorithm. The Strand Compression Algorithm is applicable to the compression of any composite type data file. 
This includes files derived from collection systems (SDDC, MDDC, NSA), medical imagery systems, or general multimedia file formats. The Strand Authoring Tool will aid the analyst in defining Strands, allow the analyst to associate a (plug-in) compression engines (lossy or lossless) with the Strand, examine the results of the algorithm (plug-in analysis engine), and output compression / decompression C(?) source. In a very general sense the Strand Authoring Tool has only two requirements: R1. The analyst wants to be able to make source code to perform compression / decompression according to a custom algorithm that he chooses, which is tuned to one or more Strands in a file format that he defines. R2. The source code, when compiled and executed, must compress and decompress instances of the specified files. In a nutshell the Strand Authoring Tool will: 1. allow the analyst too manually parse, or identify Strands within a file, 2. allow the analyst to associate compression algorithms (lossy or lossless) with each Strand, 3. compresses the input file (using the associations) to produce an output file, 4. examine (analyze) the results of the compression, if they are satisfactory go on to step 5, otherwise go back to 1 (change the parsing) or 2 (change the compression engine association), 5. and output C(?) source code that, when compiled and executed, must compress / decompress instances of the specified file, "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Not to be a wet blanket, but... Now that I've seen Allen Theobald's object blitz subject, (the Strand Authoring Tool) I think I've seen a good example of a problem which does not fit well with OOA. However, "OOA" might work in a method where the "objects" are allowed to be arbitrary processes. -Chris > -----Original Message----- > From: Allen Theobald [SMTP:theobaam@email.uc.edu] > Sent: Monday, March 29, 1999 6:58 AM > To: shlaer-mellor-users@projtech.com > Subject: Re: (SMU) OIM question appropriate? > > Allen Theobald writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > > Allen Theobald wrote: > > > I was wondering if asking about kickstarting an object blitz... > > Thanks Steve and Dan. It doesn't seem too out-of-order to post > background > info and invite comment; so here it is... :^) > > The odd part about this project is that there are no formal > requirements to speak of. It exists as a concept from a proposal > which I can freely develop any way I want to go. The "amorphousness > of an open-ended problem" can be a problem itself, though. Without a > well-defined problem to start from, one always faces a certain amount > of vertigo. > > Kind Regards, > > Allen > > ----------8<---------- cut here ----------8<---------- > > There are two users: an analyst who uses the tool, and an end-user who > uses the executable (or script) the analyst provides to compress / > decompress the files. > > The Strand Authoring Tool (SAT) provides a GUI interface for the > compression engine developer (Analyst) to implement the Strand > Compression Algorithm. The Strand Compression Algorithm is applicable > to the compression of any composite type data file. This includes > files derived from collection systems (SDDC, MDDC, NSA), medical > imagery systems, or general multimedia file formats. 
> > The Strand Authoring Tool will aid the analyst in defining Strands, allow > the analyst to associate (plug-in) compression engines (lossy or > lossless) with the Strand, examine the results of the algorithm > (plug-in analysis engine), and output compression / decompression C(?) > source. > > In a very general sense the Strand Authoring Tool has only two > requirements: > > R1. The analyst wants to be able to make source code to perform > compression / decompression according to a custom algorithm > that he chooses, which is tuned to one or more Strands in a > file format that he defines. > > R2. The source code, when compiled and executed, must compress and > decompress instances of the specified files. > > In a nutshell the Strand Authoring Tool will: > > 1. allow the analyst too manually parse, or identify Strands within a > file, > > 2. allow the analyst to associate compression algorithms (lossy or > lossless) with each Strand, > > 3. compresses the input file (using the associations) to produce an > output file, > > 4. examine (analyze) the results of the compression, if they are > satisfactory go on to step 5, otherwise go back to 1 (change the > parsing) or 2 (change the compression engine association), > > 5. and output C(?) source code that, when compiled and executed, must > compress / decompress instances of the specified file, > peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 01:03 PM 3/29/99 -0600, shlaer-mellor-users@projtech.com wrote: >"Lynch, Chris D. SDX" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Not to be a wet blanket, but... > >Now that I've seen Allen Theobald's object blitz subject, >(the Strand Authoring Tool) I think I've seen a good >example of a problem which does not fit well with OOA. Certainly there are parts of the problem that might best be allocated to realized domains, but I have to disagree with your overall assessment. I think OOA is a good choice. The key is to recognize problem aspects that are well described by OOA, and those that aren't: Domain Modeling. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- Thanks for the reset. Now when I look at Steves' starting message for this thread, I see what he meant. At 07:14 PM 3/26/99 -0800, you wrote: >"Whipp David (SMI)" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Don White wrote: > >... Given any pair of lake and >person, The will be, at most, one instance of the permit. The >question of this thread is: is it reasonable to have a model >where there can be more than one instance of PERMIT for each >pair of related fisherman and lake. M-(M:M) vs. 1-(M:M)? I could see that a fishing permit might require additional 'stickers' for different types of fish, or a hammers' relationship to the nail might be pounding or pulling. But these are still unary relationships with additional attributes. ie fishtype or poundingflag. (Aren't additional attributes the (?only?) reason to HAVE an associative object.) 
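Restating the two examples already in this thread as plain data may help; this is a hypothetical C++ rendering, and the attribute types are assumptions.  The difference between the two forms is purely in what identifies an instance of the associative object.

    #include <string>

    // Unary associative object, 1-(M:M): identified by the pair it relates.
    struct Permit {
        std::string lake_id;    // *lake_id   (R1, referential)
        std::string person_id;  // *person_id (R1, referential)
        long issue_date;        // descriptive attribute
        long expiry_date;       // descriptive attribute
    };  // identifier: (lake_id, person_id) -- at most one instance per pair

    // M-(M:M): a third identifying attribute allows many instances per pair.
    struct CaughtFish {
        std::string lake_id;    // *lake_id   (R1, referential)
        std::string person_id;  // *person_id (R1, referential)
        long   timeFishCaught;  // *third identifying attribute
        double length;          // descriptive attribute
        double weight;          // descriptive attribute
    };  // identifier: (lake_id, person_id, timeFishCaught)

Both forms can carry descriptive attributes; what distinguishes them is whether an additional attribute participates in the identifier.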
The only way I can imagine having more than one relationship at a time for any two instances of objects is if the instances can be in more than one state at the same time.  Wouldn't this indicate a bad model?

I think I understand why Steve was asking about the need for "many to something to something".  So Todd, do you have any examples that would better illustrate the use of multiple relationships (or am I still not 'getting it')?

>
>To put it another way, can the object have an additional
>identifying attribute (e.g. can the issue date be a third
>identifier).

The date would be a property of the association (not an ID) and is only important for the lifecycle of the relationship.

>Dave.
>
>--
>Dave Whipp, Senior Verification Engineer
>Siemens Microelectronics Inc. San Jose, CA 95112
>tel. (408) 501 6695. mailto:david.whipp@smi.siemens.com
>Opinions are my own, factual statements may be in error

Don W.

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Peter J. Fontana wrote:

> The key is to recognize problem aspects that are well described by OOA, and
> those that aren't: Domain Modeling.

Hi all!

Thinking about this last night I came up with the following objects of the application domain.  I know 'Analyst' and 'End User' aren't objects in the application domain, but they help communicate concepts.

There may be a need for Specification objects, especially for Strand, CompDecomp Engine, and Analysis Engine.

Objects:

 o Analyst.
 o End user.
 o File.
 o Strand.
 o CompDecomp Engine.
 o Analysis Engine.

Associations:

 Associations are critical, but finding objects is much more
 important than finding associations right now.

Analyst Scenario 1 -- Compressing a file:

 o Analyst selects uncompressed File.
 o Analyst identifies Strands in File.
 o Analyst parses file into Strands.
 o Analyst associates a CompDecomp Engine with each Strand.
 o Analyst provides Engine-specific input parameters (if any).
 o Analyst compresses Strands.
 o Analyst evaluates results.
 o Analyst selects analysis criteria (Analysis Engine), and evaluates
   the results of the compression.  If they are not satisfactory,
   re-parse or re-associate; otherwise
 o Analyst outputs C(?) source code that, when compiled and executed,
   must compress / decompress instances of the specified file.

Analyst Scenario 2 -- Editing definitions:

 o Analyst edits previously defined Strands for a File (type?).
 o Analyst edits previously defined associations for a File (type?).
 o Analyst outputs C(?) source code that, when compiled and executed,
   must compress / decompress instances of the specified File.

End User Scenario 1 -- Compressing a file:

 o End User takes source code generated by Analyst; compiles it.
 o End User executes it to compress File.
 o End User executes it to uncompress a previously Strand-compressed
   File.

----------8<---------- cut here ----------8<----------

There are two users: an analyst who uses the tool, and an end-user who uses the executable (or script) the analyst provides to compress / decompress the files.

The Strand Authoring Tool (SAT) provides a GUI interface for the compression engine developer (Analyst) to implement the Strand Compression Algorithm.  The Strand Compression Algorithm is applicable to the compression of any composite type data file.  This includes files derived from collection systems (SDDC, MDDC, NSA), medical imagery systems, or general multimedia file formats.
The Strand Authoring Tool will aid the analyst in defining Strands, allow the analyst to associate a (plug-in) compression engines (lossy or lossless) with the Strand, examine the results of the algorithm (plug-in analysis engine), and output compression / decompression C(?) source. In a very general sense the Strand Authoring Tool has only two requirements (R1 for the Analyst and R2 for the End User): R1. The analyst wants to be able to make source code to perform compression / decompression according to a custom algorithm that he chooses, which is tuned to one or more Strands in a file format that he defines. R2. The source code, when compiled and executed, must compress and decompress instances of the specified files Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- IMHO ANY problem domain of sufficient complexity is appropriate for OOA. (if it is too simple, it doesn't warrant the overhead) I think the problem here is the same one I suffer from all the time. I am a nuts and bolts person. I like to get right into implementation. Shlaer Mellor is intended to structure the problem solving process by first identifying the domains. I haven't seen that done here yet. The analyst and end user COULD be modelled AS a domain, but do not seem appropiate choices for members of this problem domain. Wouldn't the interesting domain here be a strand domain? The mission statement as I understand it is - Create interactive custom source code for the compression/ decompression of aggregate data members. The objects in a strand domain might be: * file - generates strands * strand - object variations compressed/uncompressed - takes bridge controls to output data ie compressed(Y/N), compression type(?), compression(%size reduction), ... - and take commands to compress/decompress compression type etc. * analyzer * decompressor * compressor I haven't attempted more than an intuitive application of the questions that should be asked for each object in the domain. That should probably be the next focus of this thread. The analyst (or end user) OPERATIONS might be bridges to/from the GUI domain. At 03:27 PM 3/30/99 -0500, you wrote: >Allen Theobald writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Peter J. Fontana wrote: > >> The key is to recognize problem aspects that are well described by OOA, and >> those that aren't: Domain Modeling. > >Hi all! > >Thinking about this last night I came up with the following objects of >the application domain. I know 'Analyst' and 'End User' aren't >objects in the application domain, but it helps communicate concepts. > >There may be a need for Specification objects, especially for Strand, >CompDecomp Engine, and Analyis Engine. > >Objects: > > o Analyst. > > o End user. > > o File. > > o Strand. > > o CompDecomp Engine. > > o Analysis Engine. > > >Associations: > > Associations are critical, but finding objects is much more > important than finding associations right now. > > >Analyst Scenario 1 -- Compressing a file: > > o Analyst selects uncompressed File. > > o Analyst identifies Strands in File. > > o Analyst parses file into Strands. > > o Analyst associates CompDecomp Engine to each Strand. > > o Analyst provides Engine specific input parameters (if any). > > o Analyst compresses Strands. > > o Analyst evaluates results. > > o Analyst selects analysis criteria (Analysis Engine), and > evaluates the results of the compression. 
If they are not > satisfactory re-parse or re-associate, otherwise > > o Analyst outputs C(?) source code that, when compiled and executed, must > compress / decompress instances of the specified file. > > > >Analyst Scenario 2 -- Editing definitions: > > o Analyst edits previously defined Strands for a File (type?). > > o Analyst edits previously defined associations for a File (type?). > > o Analyst outputs C(?) source code that, when compiled and executed, must > compress / decompress instances of the specified File. > > > >End User Scenario 1 -- Compressing a file: > > o End User takes source code generated by Analyst; compiles it. > > o End User excutes it to compress File. > > o End User excutes it to uncompress a previously Strand-compressed > File. > > >----------8<---------- cut here ----------8<---------- > >There are two users: an analyst who uses the tool, and an end-user who >uses the executable (or script) the analyst provides to compress / >decompress the files. > >The Strand Authoring Tool (SAT) provides a GUI interface for the >compression engine developer (Analyst) to implement the Strand >Compression Algorithm. The Strand Compression Algorithm is applicable >to the compression of any composite type data file. This includes >files derived from collection systems (SDDC, MDDC, NSA), medical >imagery systems, or general multimedia file formats. > >The Strand Authoring Tool will aid the analyst in defining Strands, allow > >the analyst to associate a (plug-in) compression engines (lossy or >lossless) with the Strand, examine the results of the algorithm >(plug-in analysis engine), and output compression / decompression C(?) >source. > >In a very general sense the Strand Authoring Tool has only two >requirements (R1 for the Analyst and R2 for the End User): > > R1. The analyst wants to be able to make source code to perform > compression / decompression according to a custom algorithm > that he chooses, which is tuned to one or more Strands in a > file format that he defines. > > R2. The source code, when compiled and executed, must compress and > decompress instances of the specified files > > > "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to White and Theobald: --------------------------------- I think Strand is an object rather than a domain. Because the analyst parses out strands from a file for compression, I assume the structure of the strand is a determining factor in the choice of suitable compression algorithms. For instance: UncompressedStrand isa UncompressedRasterStrand or UncompressedVectorStrand or UncompressedPolygonStrand or UncompressedJPEGStrand or ... UncompressedRasterStrand is composed of UncompressedRasterStrandImageCaption and One or more UncompressedRasterLine UncompressedRasterLine is composed of UncompressedLineID and One or more UncompressedPixel UncompressedPixel is composed or UncompressedPixelId UncompressedPixelColor (24 bits) etc. Note the distinction between uncompressed and compressed. A compression algorithm might compress some parts of the strand and not others, e.g. the image caption might remain uncompressed. Note also that it describes the file logically rather than how it is stored. I chose device-specific "formats" to illustrate a balance between ideal data representation and the requirements of particular devices. Is this reasonable? I am intrigued by the idea of the compressed file being in a separate domain. 
This would allow the strand domain to describe only uncompressed strands, pushing the idea of compression into the bridge. But then it would seem that the analyst's task of designing algorithms would be an exercise in bridge specification and design, an area where the SMOOA method seems to discourage complex algorithms. An alternative for specifying compression (IMHO the most difficult aspect of this problem) is to have the analyst specify the IM for the compressed versions of Raster, Vector, etc. (in the strand domain), and construct state- and process-models to translate from one to the other. This might involve the compression analyst (user #1) modeling temporary objects such as the compression dictionary used by the LZ compression algorithm. Questions for Allen Theobald: Are there any gross conceptual errors here? Is this a meaningful direction to pursue (i.e., detailing the structure of each type of image?) Also, is the system supposed to be able to allow the analyst to work with new and arbitrary strand formats, or can all the strand structures be enumerated during the analysis? Another clarification: I assume the compressed file contains multiple compressed strands. Is this correct? Regards, -Chris Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- > Are there any gross conceptual errors here? Nope! > ...is the system supposed to be able to allow the analyst to work > with new and arbitrary strand formats, or can all the strand > structures be enumerated during the analysis? The analyst is the free creator of strand definitions, since that's a lot more useful. Right here is where the limits of the Strand Authoring Tool will be defined: the less complexity you allow in strand definitions, the more types of files will be unparsable by the system. The full set of file types that the tool is able to compress and decompress is implicitly defined by the set of all possible strand definitions expressible in the strand definition "language" above. It would be useful to list a set of known, existing file types that the tool can handle, without limiting the tool to those file types. > Another clarification: I assume the compressed file contains > multiple compressed strands. Is this correct? Yes. I think. :^) For example (made up), say I have the following format of input data (fairly typical): HEADER SAMPLE 1 ... SAMPLE 8 HEADER SAMPLE 1 ... SAMPLE 16 . . The header can be uniquely identified within the data stream. The header 'data format' also contains the number of samples following the header. I would want to parse this file to produce 17 files. One containing headers only, one containing sample 1 only, etc... Then compress, and then combine to produce one file. So, all the HEADERS would be a strand. All SAMPLE 1's would be a strand, etc. Does that answer your question? I guess each SAMPLE 1 in an input file could be considered a strand, but it seems the overhead to manage it would outweigh the benefits. Regards, Allen Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- Okay. It seemed reasonable to me that strand was BOTH a domain AND an object. But please, before we continue, define your domain. Possibly a FILE domain? What would make sense to contain what you consider to be the true problem? (ie the compression?) Also, the isa's make sense to a point. Do you REALLY want to model pixels? I have thought about this point quite a few times. 
When do you you stop modelling? The bicycle? the wheel? the spokes? the fittings? the molecules? the atoms? the quanta? ... Without a definition for the domain, I guess anything COULD be an object. I would create a definition of a 'strand element' that would be the smallest unit to be compressed. You generally can't compress a pixel. Depending on your compression technique your strand element may be the whole strand (for pattern libraries) or individual lines/buffers (for run-length compression). My assumption was that each algorithm would be represented as a seperate compressor object or uncompressor object that would work on a line or arbitrarily sized buffer. Don W. At 11:02 AM 3/31/99 -0600, you wrote: >"Lynch, Chris D. SDX" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to White and Theobald: >--------------------------------- > >I think Strand is an object rather than a domain. ... >UncompressedRasterLine is composed of > UncompressedLineID and > One or more UncompressedPixel > >UncompressedPixel is composed or > UncompressedPixelId > UncompressedPixelColor (24 bits) > > >etc. > >Note the distinction between uncompressed >and compressed. A compression algorithm might >compress some parts of the strand and not others, >e.g. the image caption might remain uncompressed. >Note also that it describes the file logically >rather than how it is stored. I chose device-specific >"formats" to illustrate a balance between ideal >data representation and the requirements of >particular devices. Is this reasonable? > >I am intrigued by the idea of the compressed >file being in a separate domain. This would allow >the strand domain to describe only uncompressed >strands, pushing the idea of compression into >the bridge. But then it would seem that the analyst's >task of designing algorithms would be an exercise >in bridge specification and design, an area where >the SMOOA method seems to discourage complex >algorithms. > >An alternative for specifying compression (IMHO the >most difficult aspect of this problem) is to >have the analyst specify the IM for the compressed >versions of Raster, Vector, etc. (in the strand >domain), and construct state- and process-models >to translate from one to the other. >This might involve the compression analyst (user #1) >modeling temporary objects such as the compression >dictionary used by the LZ compression algorithm. > > >Questions for Allen Theobald: > >Are there any gross conceptual errors here? >Is this a meaningful direction to pursue (i.e., detailing >the structure of each type of image?) Also, >is the system supposed to be able to allow the analyst to work >with new and arbitrary strand formats, or can all the strand >structures be enumerated during the analysis? > >Another clarification: I assume the compressed file contains >multiple compressed strands. Is this correct? > >Regards, > >-Chris > > > "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to White: --------------------- >Okay. It seemed reasonable to me that strand was BOTH a domain AND >an object. But please, before we continue, define your domain. >Possibly a FILE domain? What would make sense to contain what you >consider to be the true problem? (ie the compression?) In my example the application domain is ImageCompression. In it an image is an object with a complex structure dependent on its associated UncompressedImageType. 
Each UncompressedImageType is related to one or more CompressionAlgorithms, which relate uncompressedImage types to CompressedImageTypes. I think I want conversion to/from files to take place in a bridge, with files being a separate domain. Strands seem like aggregations of images of like type. >Also, the isa's make sense to a point. Do you REALLY want to model >pixels? I have thought about this point quite a few times. When do you >you stop modelling? The bicycle? the wheel? the spokes? the fittings? >the molecules? the atoms? the quanta? ... Without a definition for the >domain, I guess anything COULD be an object. >I would create a definition of a 'strand element' that would be the smallest >unit to be compressed. You generally can't compress a pixel. Depending on your >compression technique your strand element may be the whole strand (for pattern >libraries) or individual lines/buffers (for run-length compression). My example was partially motivated by the desire to show how compression schemes can be data-dependent. E.g., for rasterImage, there's an implied restriction that the row must be completely self-contained, so row would be the 'strand element' you refer to. In a vector file, it might be the vector. To address your point on pixels: if lossy compression is acceptable, why not reduce the pixels? There's a lot of compressability there. Also, SMOOA requires normalization down to that level, because a row object cannot contain a variable number of pixels. (You can see why I don't like this application for SMOOA!) >My assumption was that each algorithm would be represented as a seperate >compressor object or uncompressor object that would work on a line or >arbitrarily sized buffer. I was under the impression that Allen wanted to define compression in a data-sensitive way and thus would not be interested in compressing generic sea-of-bytes buffers. But maybe I'm wrong... (Allen: BTW, you *do* realize there are reasonably priced compression libraries out there? :-) > Another aside: this domain seems to strongly resemble language translation. But all this is easy speculation by the guy who doesn't have to do the work. Best of luck, Allen! -Chris Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Don White writes... > Shlaer Mellor is intended to structure the problem solving process > by first identifying the domains. I haven't seen that done here yet. And later says... > But please, before we continue, define your domain. Possibly a FILE > domain? What would make sense to contain what you consider to be the > true problem? (ie the compression?) I never realized that it was all that cut-and-dry. I always viewed domain modelling as I do writing: come up with the contents first (write), then generate the table of contents (organize). In that regard, I object blitz first (write), then come up with the domains (organize). At this point I recursively object blitz the domains seperately. These are all valid points, but the tone suggests you are slightly irritated. Believe me, my intention is not to irritate n-thousand ESMUGGERS. Regards, Allen Theobald Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- I apologize. I forgot that everyone organizes data differently. Problem was, I didn't understand that data was being collected for eventual organization into domains. I thought we were 'jumping the gun'. My intention was only to help. I'm not really irritated. 
Sorry to have given that impression. :) Don W. At 05:19 PM 3/31/99 -0500, you wrote: >Allen Theobald writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Don White writes... >> Shlaer Mellor is intended to structure the problem solving process >> by first identifying the domains. I haven't seen that done here yet. > >And later says... >> But please, before we continue, define your domain. Possibly a FILE >> domain? What would make sense to contain what you consider to be the >> true problem? (ie the compression?) > >I never realized that it was all that cut-and-dry. I always viewed >domain modelling as I do writing: come up with the contents first >(write), then generate the table of contents (organize). In that >regard, I object blitz first (write), then come up with the domains >(organize). At this point I recursively object blitz the domains >seperately. > >These are all valid points, but the tone suggests you are slightly >irritated. Believe me, my intention is not to irritate n-thousand >ESMUGGERS. > >Regards, > >Allen Theobald > > > > "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Don White wrote: > > (Aren't additional attributes the (?only?) reason to HAVE an > associative object.) They are also used when you don't want to formalise a relationship in either peer. On an M:M relationship, this is mandatory. > The only way I can imagine having more than one > relationship at a time for any two instances of objects is if the > instances can be in more than one state at the same time. Wouldn't > this indicate a bad model? The important question is whther the two instances (of the relationship) are conceptually different in the problem domain. . > >To put it another way, can the object have an additional > >identifying attribute (e.g. can the issue date be a third > >identifier). > The date would be a property of the association (not an ID) and > is only important for the lifecycle of the relationship. It depends. Are the two permits, issued at differennt times, really the same permit. Imagine the situation where, just before your old permit expires, a new one is sent through the post. You now have two permits, each with a different issue date. Shortly thereafter, your old permit will transition to the "no longer valid" state and the new one will transition to the "valid" state. You may find a different way to model it, but the case can be made for the m-(m:m) relationship. Dave. Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- At 07:16 PM 3/29/99 -0800, you wrote: >"Whipp David (SMI)" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >> Don White wrote: >> >> (Aren't additional attributes the (?only?) reason to HAVE an >> associative object.) > >They are also used when you don't want to formalise a relationship >in either peer. On an M:M relationship, this is mandatory. Ah! Of course! >> The only way I can imagine having more than one >> relationship at a time for any two instances of objects is if the >> instances can be in more than one state at the same time. Wouldn't ... >It depends. Are the two permits, issued at differennt times, really >the same permit. Imagine the situation where, just before your old >permit expires, a new one is sent through the post. You now have >two permits, each with a different issue date. 
Shortly thereafter, >your old permit will transition to the "no longer valid" state and the >new one will transition to the "valid" state. Good example! Either permit could be used, so the system should accept both. Put another way, any relationship that needs to be renewed without breaking the connection will have to support temporary overlapping relationships. >You may find a different way to model it, but the case can be made >for the m-(m:m) relationship. As you say, it could probably be modelled some other way. But, I always like a model to be as close as possible to the thing being modelled. This permit example sounds like a pretty clear cut case. > > >Dave. Don W. Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- Chris, At 03:37 PM 3/31/99 -0600, you wrote: >"Lynch, Chris D. SDX" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to White: >--------------------- ... >In my example the application domain is ImageCompression. Cool. ... >I think I want conversion to/from files to take place >in a bridge, with files being a separate domain. Is a strand necessarily an image? ... >>unit to be compressed. You generally can't compress a pixel. Depending on >your >>compression technique your strand element may be the whole strand (for >pattern >>libraries) or individual lines/buffers (for run-length compression). > >My example was partially motivated by the desire to show >how compression schemes can be data-dependent. E.g., for >rasterImage, there's an implied restriction that the row >must be completely self-contained, so row would be the >'strand element' you refer to. In a vector file, it might >be the vector. Makes sense. >To address your point on pixels: if lossy compression >is acceptable, why not reduce the pixels? There's a Isn't lossy compression based on patterned replacement of groups of pixels? >lot of compressability there. Also, SMOOA >requires normalization down to that level, because a row >object cannot contain a variable number of pixels. (You >can see why I don't like this application for SMOOA!) > >>My assumption was that each algorithm would be represented as a seperate >>compressor object or uncompressor object that would work on a line or >>arbitrarily sized buffer. > >I was under the impression that Allen wanted to define >compression in a data-sensitive way and thus would not >be interested in compressing generic sea-of-bytes buffers. >But maybe I'm wrong... > >(Allen: BTW, you *do* realize there are reasonably priced >compression libraries out there? :-) > > >Another aside: this domain seems to strongly resemble >language translation. > >But all this is easy speculation by the guy who doesn't >have to do the work. > >Best of luck, Allen! > >-Chris > > "Todd Cooper" writes to shlaer-mellor-users: -------------------------------------------------------------------- > > >To put it another way, can the object have an additional > > >identifying attribute (e.g. can the issue date be a third > > >identifier). > > > The date would be a property of the association (not an ID) and > > is only important for the lifecycle of the relationship. > > It depends. Are the two permits, issued at differennt times, really > the same permit. Imagine the situation where, just before your old > permit expires, a new one is sent through the post. You now have > two permits, each with a different issue date. 
Shortly thereafter, > your old permit will transition to the "no longer valid" state and the > new one will transition to the "valid" state. > > You may find a different way to model it, but the case can be made > for the m-(m:m) relationship. In my example of an associative object recording fish caught in a lake (e.g., as part of a fishing tournament), the time/date of the catch could act as a third identifier, unless you use the patented Cooper Method, in which case you'd be pullin' those puppies out of the water so quick that finding a time stamp with an appropriate granularity might prove a bit difficult. (-; In this case, the ol' counter approach would be best for the third identifier, though even then, it will probably be incrementing faster than the dollar digit on a This Sale display at a gas station in San Diego ... and these days, that sucker is flyin' by pretty fast! Additional descriptive attributes of this associative object might include fish length, weight, and type. Hey did I ever tell you about the one that ... well, I had to let it go anyway! -Todd 'archive.9904' -- Subject: Re: (SMU) simplifications "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- View in a mono-space font: ------------ ----------------- | Lake | |Fisherman | | *name |<<------->>|*name | | location | ^ |lake fishing on| ------------ ^ ----------------- | | --------------- | Fish Caught | | *lake_name | | *fisherman | | *time_caught| | length | | weight | --------------- -or- --------------- | Fish Caught | | * DNA code | | lake_name | | fisherman | | time_caught | | length | | weight | --------------- ^ ^ | ^ | | ------------ | | ----------------- | Lake |<-| |->|Fisherman | | *name | |*name | | location |<<------->>|lake fishing on| ------------ ^ ----------------- | | | ---------------------- | Fishing Experience | | *lake_name | | *fisherman | ---------------------- As I assume Todd does, I like the first one a lot better....... Comments? <<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>> Dana Simonson Engineering Section Manager EF Johnson Radio Products Division Transcrypt International dsimonson@transcrypt.com www.transcrypt.com >>> "Todd Cooper" 03/31/99 08:36PM >>> "Todd Cooper" writes to shlaer-mellor-users: -------------------------------------------------------------------- > > >To put it another way, can the object have an additional > > >identifying attribute (e.g. can the issue date be a third > > >identifier). > > > The date would be a property of the association (not an ID) and > > is only important for the lifecycle of the relationship. > > It depends. Are the two permits, issued at differennt times, really > the same permit. Imagine the situation where, just before your old > permit expires, a new one is sent through the post. You now have > two permits, each with a different issue date. Shortly thereafter, > your old permit will transition to the "no longer valid" state and the > new one will transition to the "valid" state. > > You may find a different way to model it, but the case can be made > for the m-(m:m) relationship. In my example of an associative object recording fish caught in a lake (e.g., as part of a fishing tournament), the time/date of the catch could act as a third identifier, unless you use the patented Cooper Method, in which case you'd be pullin' those puppies out of the water so quick that finding a time stamp with an appropriate granularity might prove a bit difficult. 
(-; In this case, the ol' counter approach would be best for the third identifier, though even then, it will probably be incrementing faster than the dollar digit on a This Sale display at a gas station in San Diego ... and these days, that sucker is flyin' by pretty fast! Additional descriptive attributes of this associative object might include fish length, weight, and type. Hey did I ever tell you about the one that ... well, I had to let it go anyway! -Todd Subject: Re: (SMU) OIM question appropriate? (Strand object blitz) Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Sorry Don. Yesterday wasn't a good day for me. I Sometimes wish I would sleep on a post before sending it out. Chris says... > You can see why I don't like this application for SMOOA! Not to be blasphemous, but is their an OO it *is* suited for, or is this just not suited for OO at all? > I was under the impression that Allen wanted to define compression > in a data-sensitive way and thus would not be interested in > compressing generic sea-of-bytes buffers. Yes. Assume the "analyst" has intimate knowledge about the file he is trying to compress, and will make use of that knowledge when defining strands. > (Allen: BTW, you *do* realize there are reasonably priced > compression libraries out there? :-) > Yeah. But this tool is intended to work with large files (although not exclusively), sometimes extremely large (2-4 Gb), that are not well suited for compression by compress, pkzip, gzip, etc. Back to my illustration. Say I have the following format of input data (fairly typical): HEADER SAMPLE 1 ... SAMPLE 8 HEADER SAMPLE 1 ... SAMPLE 16 . . The header can be uniquely identified within the data stream. The header 'data format' also contains the number of samples following the header. I would want to parse this file to produce 17 files. One containing headers only, one containing sample 1 only, etc... Then compress, and then combine to produce one file. So, all the HEADERS would be a strand. All SAMPLE 1's would be a strand, etc. Now the analyst might find that RLE works best on the HEADERS, but Arithmetic works best on SAMPLE 1. So all the HEADERS are extracted out, combined, and then RLE compressed. All the SAMPLE 1s are extracted out, combined, and then Arithmetically compressed, etc. You get the picture. Regards, Allen Subject: RE: (SMU) OIM question appropriate? (Strand object blitz) "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- > >Don White writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > > >Is a strand necessarily an image? > Good question. I assumed that it was. > > > > >>To address your point on pixels: if lossy compression > >>is acceptable, why not reduce the pixels?... > >Isn't lossy compression based on patterned replacement of > >groups of pixels? > > Yes, but you need all the original pixels to start with. Recall that in my example the individual pixels are part of an > *uncompressed* image. The model of the *compressed* > image could represent the replacements > you refer to (by "super pixels", PixelBlocks, etc), and refrain from > showing the original level of detail. The idea is that one data structure > is being transformed into another, and both structures must be modeled. 
> > -Chris Subject: RE: (SMU) simplifications Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- In my idealistic teenage brain (which I carry around in a forty year old head), I think of the objects in a domain as the star players and the relationships as support for the stars. In the cases of permits and of fish, the relationships appear to be the stars of the show. >Dave Whipp said >You may find a different way to model it, but the case can be made >for the m-(m:m) relationship. I wonder if there is a point of SM philosophy here. If a relation- ship is really the focus of a domain, shouldn't it be an object with constrained references rather than a relationship? ------------ fishes ----------------- | Lake | at has |Fisherman | | *name |<<------------>>|*name | | location | C |lake fishing on| ------------ ----------------- /\ comes out of /\ caught by | | | --------------- | | | Fish Caught | | | C| *lake_name |C | |-->>| *fisherman |<<-----| produces| *time_caught| catches | length | | weight | --------------- The same could be said of the permit. This leads me to wonder if m-(m:m) cases are an indication that domain members have not been completely identified. But, you can pound a nail with a wrench. If the nail goes in and the wrench isn't damaged who's to say it was the wrong thing to do. Idealistically yours, Don W. At 06:36 PM 3/31/99 -0800, you wrote: >"Todd Cooper" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >> > >To put it another way, can the object have an additional >> > >identifying attribute (e.g. can the issue date be a third >> > >identifier). >> >> > The date would be a property of the association (not an ID) and >> > is only important for the lifecycle of the relationship. >> >> It depends. Are the two permits, issued at differennt times, really >> the same permit. Imagine the situation where, just before your old >> permit expires, a new one is sent through the post. You now have >> two permits, each with a different issue date. Shortly thereafter, >> your old permit will transition to the "no longer valid" state and the >> new one will transition to the "valid" state. >> >> You may find a different way to model it, but the case can be made >> for the m-(m:m) relationship. > >In my example of an associative object recording fish caught in a lake >(e.g., as part of a fishing tournament), the time/date of the catch could >act as a third identifier, unless you use the patented Cooper Method, in >which case you'd be pullin' those puppies out of the water so quick that >finding a time stamp with an appropriate granularity might prove a bit >difficult. (-; In this case, the ol' counter approach would be best for the >third identifier, though even then, it will probably be incrementing faster >than the dollar digit on a This Sale display at a gas station in San Diego >... and these days, that sucker is flyin' by pretty fast! > >Additional descriptive attributes of this associative object might include >fish length, weight, and type. > >Hey did I ever tell you about the one that ... well, I had to let it go >anyway! > >-Todd > > > Subject: Re: (SMU) OIM question appropriate? 
(Strand object blitz) Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- At 10:05 AM 4/1/99 -0500, you wrote: >Allen Theobald writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Sorry Don. Yesterday wasn't a good day for me. I Sometimes wish I would >sleep on a post before sending it out. No Problem. Keep smiling :) >Chris says... >> You can see why I don't like this application for SMOOA! > >Not to be blasphemous, but is their an OO it *is* suited for, or is >this just not suited for OO at all? I'm sounding like a rusty hinge here, but, I still don't see the unsuitability. I seeing a somewhat amorphous problem set because the user ("analyst") can define header and data, presumably at run time. But this could easily be handled by GUI interaction in combination with a library of known header/strand types. Before compression, data patterns could lead to suggested strand selection criteria. After compression, a generic strand header could contain any strand specific header data as well as the header data specific to the original file. >> I was under the impression that Allen wanted to define compression >> in a data-sensitive way and thus would not be interested in >> compressing generic sea-of-bytes buffers. > >Yes. Assume the "analyst" has intimate knowledge about the file he is >trying to compress, and will make use of that knowledge when defining >strands. So maybe a strand identifier object capable of taking GUI direction through a bridge? >> (Allen: BTW, you *do* realize there are reasonably priced >> compression libraries out there? :-) > > >Yeah. But this tool is intended to work with large files (although >not exclusively), sometimes extremely large (2-4 Gb), that are not >well suited for compression by compress, pkzip, gzip, etc. > >Back to my illustration. Say I have the following format of input >data (fairly typical): > > HEADER > SAMPLE 1 > ... > SAMPLE 8 > HEADER > SAMPLE 1 > ... > SAMPLE 16 > . > . > Sounds pretty straight-forward. It kind of reminds me of CHUNK formatting in .ILBM files from my old Amiga programming days. >The header can be uniquely identified within the data stream. The >header 'data format' also contains the number of samples following the >header. I would want to parse this file to produce 17 files. One >containing headers only, one containing sample 1 only, etc... Then >compress, and then combine to produce one file. > >So, all the HEADERS would be a strand. All SAMPLE 1's would be a >strand, etc. Now the analyst might find that RLE works best on the >HEADERS, but Arithmetic works best on SAMPLE 1. So all the HEADERS >are extracted out, combined, and then RLE compressed. All the SAMPLE >1s are extracted out, combined, and then Arithmetically compressed, >etc. You get the picture. Why would you seperate the HEADERS from the strands? That would make archiving less safe than if all decompression header data was stored with each strand. I would have made each HEADER/SAMPLE[1-n] into a strand. If you got only one strand file from a set, you could still successfully decode it. >Regards, > >Allen Hope your day was better today :) Don W. Subject: RE: (SMU) simplifications "Whipp David (SMI)" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Don White writes to shlaer-mellor-users: > >Dave Whipp said > >You may find a different way to model it, but the case can be made > >for the m-(m:m) relationship. 
> > I wonder if there is a point of SM philosophy here. If a relation- > ship is really the focus of a domain, shouldn't it be an object > with constrained references rather than a relationship? > [diagram cut] > The same could be said of the permit. This leads me to wonder if > m-(m:m) cases are an indication that domain members have not been > completely identified. > (I'll ignore the fact that you still need an associative object for the M:M between Lake and Fisherman) For your relationships, I'd agree with your model, and use the additional relationships. But, if you rename "fishes at" to "catches fish at", then the powerful associative object seems more appropriate. Philosphically, I tend to equate relationships with their formalisation. For binary relationships, this means that the referential attributes *are* the relationship. For associative relationships, the whole associative object *is* the relationship. The lines on an OIM are redundant if attribute-domains are fully specified. I strongly dislike the link/unlink concepts used by most CASE tools (and SMALL). Dave. p.s. are there any lurkers in the SF Bay area? - Reply privately. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: RE: (SMU) simplifications "Todd Cooper" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Don White writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > In my idealistic teenage brain (which I carry around in a forty > year old head), I think of the objects in a domain as the star > players Or was that Starr players? (A little SMM humor...I know, very little) > and the relationships as support for the stars. In the > cases of permits and of fish, the relationships appear to be the > stars of the show. Before we get too much further, though, just remember that the ONLY reason a relationship is formalized is so that the information in one object (including identifiers) can be accessed by another to solve the specific problem set for which the IM is being created in the first place. Sitting in an arm chair, many abstract relationships may be postulated based on the real-world context in which they are found but are rendered useless because they are not germane to the problem being solved. > > >Dave Whipp said > >You may find a different way to model it, but the case can be made > >for the m-(m:m) relationship. > > I wonder if there is a point of SM philosophy here. If a relation- > ship is really the focus of a domain, shouldn't it be an object > with constrained references rather than a relationship? > > > The same could be said of the permit. This leads me to wonder if > m-(m:m) cases are an indication that domain members have not been > completely identified. I think not. Instead, it clearly and concisely communicates the fact that this object exists solely because of the relationship between the guy and his lake. If our application also kept track of fish in the lake regardless of whether or not they were caught, then you would have grounds for a standalone fish object with separate relationships for "lives in" and "caught by". 
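To make the "referential attributes *are* the relationship" point concrete, here is a minimal sketch (not from any of the posts), assuming a plain Python rendering of Don White's Fish Caught diagram; the flat in-memory representation and the attribute types are my own assumptions.

# Illustrative sketch only: the associative object carries the referential
# attributes that formalize the M:M between Lake and Fisherman, plus the
# descriptive attributes of the catch itself.
from dataclasses import dataclass

@dataclass
class Lake:
    name: str             # *identifier
    location: str

@dataclass
class Fisherman:
    name: str             # *identifier
    lake_fishing_on: str  # referential attribute -> Lake.name

@dataclass
class FishCaught:         # associative object on "catches fish at"
    lake_name: str        # *referential -> Lake.name
    fisherman: str        # *referential -> Fisherman.name
    time_caught: str      # possible third identifier, per the thread
    length: float
    weight: float

# One FishCaught instance links one Fisherman to one Lake; many such
# instances together formalize the M:M relationship.
catch = FishCaught("Loon Lake", "Todd", "1999-03-31T18:36", 42.0, 3.5)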
That brings to mind a funny model:

 ------------            fishes                 -----------------
 | Lake     |            at      has            | Fisherman     |
 | *location|<<------------------------------>> | *DNA          |
 | name     |        R2       /\                | name          |
 | toxicity |        /\       C                 | poleSponsor   |
 ------------         |                         | baitSponsor   |
    /\                |                         | prizeDollars  |
    | swims in        |                         -----------------
    |                 |
    |       ------------------      ---------------------
    |       | Happy Fish     |      | Sad Fish          |
    |       | *name (R3)     |      | *name (R3)        |
    |       | *location (R3) |      | *location (R2,R3) |
    |       | favorite_food  |      | *DNA (R2)         |
    |       | ageInFishYears |      | time_caught       |
    |       | state          |      | state             |
    |       ------------------      ---------------------
    |                |                        |
    |                --------------------------
    |                            | R3
    |                           ---
    |                            |
    |                 -------------------
    |                 | Your Basic Fish |
    |                 | *name           |
    |--------------->>| *location (R1)  |
             R1 C     | length          |
                      | weight          |
                      -------------------

An M-(M:M) associative/migrating subtype object with a distributed state model!

Note: For those of you wondering about the Sad Fish 'state': Oops, SpitItOut, SwimAway, SuckinWind, FLAP, FLAp, FLap, Flap, What'sThatBrightLightInTheDistance, ...

And yes, they are both Born-n-Die lifecycles.

-Todd

Subject: (SMU) Object reuse

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Things have been quiet for a spell, so I guess it is time to wake everyone up.

Some time ago I was in a debate with someone on another forum about the way to model a particular problem. Basically the guy had an Order System that processed POs and a Billing System that processed Invoices. These were separate subsystems that conform to our notion of domains. This was modeled as

Order System (OS):

Purchase Order (PO) <-------->> Order Line (OL)   [PO Order Line (POOL)]

Billing System (BS):

Invoice (I) <-------->> Order Line (OL)           [Invoice Order Line (IOL)]

so that the Order Line object was being reused. (Ignore the stuff in brackets for now.)

We have a methodological admonition that the same object should not appear in two domains. Different views of it are OK, but not the same abstraction. Therefore this use of Order Line would be illegal. So we would have to put Order Line in a third domain and have OS and BS access that domain as a service. All well and good, but just a tad clumsy and certainly a trial for the Architect to make those communications efficient.

Q1: Is that admonition a warning or a statute? Certainly it should raise a warning flag that something *might* be awry with the level of abstraction of the domains and due attention should be brought to bear. My problem is that I don't think OS and BS are at different levels of abstraction. Different subject matters, Yes. But different levels of abstraction, I don't think so. Now if two domains really are at the same level of abstraction, why would it be a problem for them to have a couple of objects in common? We aren't talking instances here -- the instances would be physically different (in this case IOL would be created from data passed over the bridge from POOL) -- just the object characteristics.

As it happens, in the debate I attacked the idea that the Order Lines were really the same. In fact they are very similar but there are some attributes that are different (e.g., a Backorder Flag in the Invoice version). So it turns out the OL had fields that were undefined in one domain or the other. Therefore they really should be PO Order Line and Invoice Order Line, as indicated in the brackets above. This superficially resolves the problem.

Unfortunately one thing is still niggling me.
If OS and BS were combined into a single domain I would not think twice about having a supertype, Order Line, with POOL and IOL as subtypes. My problem is that POOL and IOL share a common is-a ancestry. If I split the domains that doesn't change -- all I have done is to move the supertype's attributes into each of the POOL and IOL objects. It seems to me that commonality suggests that those objects *are* related despite being different in detail -- that is, they are the same to the extent that they share the is-a ancestry.

Q2: Is this a legitimate concern or simply an attack of middle-aged angst?

--
H. S. Lahman          There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: RE: (SMU) Object reuse

"Whipp David (ITC)" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Things have been quiet for a spell, so I guess it is time to
> wake everyone up.

I was enjoying a nice nap and you had to go and wake me up. I'll have to find some reason to say that you're wrong ;-)

> We have a methodological admonition that the same object
> should not appear in two domains. Different views of it
> are OK, but not the same abstraction. Therefore this use of
> Order Line would be illegal. So we would have to put Order
> Line in a third domain and have OS and BS access that domain
> as a service.

Using a third domain would not help. Each domain that uses a concept must have a counterpart object that views the concept. So now you'd have an OL in all three domains!

> Q1: Is that admonition a warning or a statute? Certainly
> it should raise a warning flag that something *might* be
> awry with the level of abstraction of the domains and due
> attention should be brought to bear.

I'd treat it as a statute. But then I'd get a good lawyer :-)

Seriously though, I frequently find that counterpart objects look very similar. Often you have to look at the state actions to see the difference. Both state models usually have a common skeleton, though with some differences in detail.

> My problem is that I don't think OS and BS are at different
> levels of abstraction. Different subject matters, Yes. But
> different levels of abstraction, I don't think so.

I feel happier omitting the words "levels of". I tend to view a system as an n-dimensional space (usually n=3) with abstraction planes cutting across it at all sorts of weird angles. And the planes aren't always flat or thin.

> Now if two
> domains really are at the same level of abstraction, why
> would it be a problem for them to have a couple of objects
> in common? We aren't talking instances here -- the
> instances would be physically different (in this case IOL
> would be created from data passed over the bridge from POOL)
> -- just the object characteristics.

If two objects in different domains are different views on a common subject, then the two instances of the objects may be conceptually the same. It is quite common to connect domains using inheritance.

[...]

> Unfortunately one thing is still niggling me. If OS and BS
> were combined into a single domain I would not think twice
> about having a supertype, Order Line, with POOL and IOL as
> subtypes. My problem is that POOL and IOL share a common
> is-a ancestry. If I split the domains that doesn't change
> -- all I have done is to move the supertype's attributes
> into each of the POOL and IOL objects.
It seems to me that > commonality suggests that those objects *are* related > despite being different in detail -- that is, they are the > same to the extent that they share the is-a ancestry. > I think we can agree that they are similar: there is a common concept that they represent. I don't like the shared supertype because it forces one instance to be 2 objects. You seem to have an is-a relationship that is not a supertype relationship. This would suggest a pair of peer domains. There are several ways of connecting domains. Sometimes, one domain is a specialisation of another, in which case inheritance is useful. Other times, the wiring loom metaphor is appropriate. 2 subject matter in the same abstraction are simply plugged together with function calls (adaptor pattern). A situation that I find interesting is where the two domains are like layers on a map. One layer describes the roads and another describes the hills. In this case, the cross-cutting techniques of AOP (aspect oriented programming) are useful. A slight variation is where the domains interact like coloured lights on a sheet of paper. I am coming to believe that AOP is a very useful way to think about domain interactions. AOP aspects and SM domains appear to be synonymous. But the bridging model of AOP, although immature, has much greater potential that the wormholes of SM. Well, that's enough changing the subject for one post. Dave. p.s. I've recently moved from Europe to Santa Clara, CA. Is there anyone local who'd like to go out for a drink? -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: Re: (SMU) Object reuse lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > So we would have to put Order > > Line in a third domain and have OS and BS access that domain > > as a service. > > > Using a third domain would not help. Each domain that uses a > concept must have a counterpart object that views the concept. > So now you'd have an OL in all three domains! Say, what? We do this all the time with implementation, architectural, and realized domains. In the client domain we invoke the services via transforms, etc. that map directly as bridges. > Seriously though, I frequently find that counterpart objects look > very similar. Often you have too look at the state-actions to > see the different. Both state models usually have a common > skeleton, though some differences in details. I think I need to understand your definition of a counterpart object. I have always thought of them as different abstractions or views of the same underlying problem space entity. The classic example from the training course is the Train object that is an icon {pixel locations, color, etc.} in the GUI domain while in another domain the abstraction may be more physical {number/type of cars, GPS location, etc.}. The problem with OL is that the abstractions are identical. > I feel happier omitting the words "levels of". I tend to view a > system as an n-dimensional space (usually n=3) with > abstraction planes cutting accross it at all sorts of wierd > angles. And the planes aren't always flat nor thin. Hmmm. We were taught that the domain chart corresponds to different levels of abstraction going from very high at the Application domain to very low at the architectural domains. 
However, I have always chosen to interpret that as one of several viewpoints. B-) When I say "levels of abstraction" I am simply trying to capture the idea that entire domains represent a consistent degree of abstraction internally but different domains provide different degrees of abstraction. Are you suggesting that a domain can have two or more of your planes pass through it? > If two objects in different domains are different views on a > common subject, then the two instances of the objects > may be conceptually the same. It is quite common to connect > domains using inheritence. Quite common for whom? B-) I have no idea how you would even do that in the OOA. Are you talking about architectural mechanisms? > I think we can agree that they are similar: there is a common > concept that they represent. I don't like the shared supertype > because it forces one instance to be 2 objects. You seem to > have an is-a relationship that is not a supertype relationship. > This would suggest a pair of peer domains. First, I think they are always separate instances. The IOL instance would be created later in time, for one thing. The normal procedure would be for whoever creates and Invoice to extract the shared data from the OS domain and then create the Invoice in the BS with its own OLs. As I indicated, it OS and BS were combined in a single domain, then one would not hesitate to subtype IOL and POOL from a common ancestor. In that case they would be separate instances. They remains separate instances if the domains are separated. Second, I think the is-a does exist implicitly -- that is what is causing my angst. It would certainly exist explicitly if the domains were combined; the domain model would not be properly normalized unless the common attributes were collected in a supertype. When the domains are separated, there is no need for the supertype because there is only one subtype represented in each domain. My argument is that the supertype is still there implicitly when one views the two domains at the same time. > There are several ways of connecting domains. Sometimes, > one domain is a specialisation of another, in which case > inheritance is useful. Other times, the wiring loom metaphor > is appropriate. 2 subject matter in the same abstraction are > simply plugged together with function calls (adaptor pattern). I assume you are talking about translation mechanisms here. My problem is with the OOA models. > A situation that I find interesting is where the two domains are > like layers on a map. One layer describes the roads and > another describes the hills. In this case, the cross-cutting > techniques of AOP (aspect oriented programming) are > useful. A slight variation is where the domains interact like > coloured lights on a sheet of paper. > > I am coming to believe that AOP is a very useful way to > think about domain interactions. AOP aspects and SM > domains appear to be synonymous. But the bridging > model of AOP, although immature, has much greater > potential that the wormholes of SM. Alas, I know nothing about AOP. I lead a sheltered life. > p.s. I've recently moved from Europe to Santa Clara, CA. Is there > anyone local who'd like to go out for a drink? Unfortunately, it's a long hop from Boston. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: RE: (SMU) Object reuse "Lynch, Chris D. 
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > >Some time ago I was in a debate with someone on another >forum about the way to model a particular problem. >Basically the guy had an Order System that processed POs and >a Billing System that processed Invoices. These were >separate subsystems that conform to our notion of domains. It seems to me they are the application domains of two distinct applications, which happen to have superficial resemblance in one small area. >As it happens, in the debate I attacked the idea that the >Order Lines were really the same. In fact they are very >similar but there are some attributes that are different >(e.g., a Backorder Flag in the Invoice version). So it >turns out the OL had fields that were undefined in one >domain or the other. Therefore they really should be PO >Order Line and Invoice Order Line, as indicated in the >brackets above. This superficially resolves the problem. Your resolution is not superficial, because the objects are more different than they are similar. I think your argument shows the benefits of analysis before programming. :-) The example attempt at reuse is a good illustration of breaking an important re-use rule: if you have to break the conceptual integrity of something to reuse it, it is not really reusable. Conceptually you are starting from scratch, even if you keep some of the code. To address your original question (of having one object in two domains as a form of reuse), I think it's an unbreakable rule, like the one against being in two places at one time. I would not do it on one domain chart, i.e., within one system. I might, however, reuse an object in another system, i.e., take object A (and its collaborators, if needed) from system 1 and reincarnate it/them in the corresponding domain of system 2, if I could do so without changing the concept of the objects involved. -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- Subject: Re: (SMU) Object reuse Don White writes to shlaer-mellor-users: -------------------------------------------------------------------- Awright, who hid the snooze button. Hi H.S., Can't definitively answer either question (too much middle-aged angst of my own), but, it really does sound like the domain should have been 'accounting'. With that aside, it sounds like the same INSTANCES are needed in each domain (excepting the addition of the BackOrder flag) which sounds to me like a domain identification problem. Don W. At 05:52 PM 4/14/99 -0400, you wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Things have been quiet for a spell, so I guess it is time to >wake everyone up. > >Some time ago I was in a debate with someone on another >forum about the way to model a particular problem. >Basically the guy had an Order System that processed POs and >a Billing System that processed Invoices. These were >separate subsystems that conform to our notion of domains. >This was modeled as > >Order System (OS): > >Purchase Order (PO)<-------->> Order Line (OL) [PO Order >Line (POOL)] > >Billing System (BS): > >Invoice (I) <-------->> Order Line (OL) [Invoice Order >Line (IOL)] > >so that the Order Line object was being reused. 
Subject: RE: (SMU) Object reuse

"Whipp David (ITC)" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> lahman wrote
> Responding to Whipp...
>
> > Using a third domain would not help. Each domain that uses a
> > concept must have a counterpart object that views the concept.
> > So now you'd have an OL in all three domains!
>
> Say, what? We do this all the time with implementation, architectural,
> and
> realized domains. In the client domain we invoke the services via
> transforms, etc.
> that map directly as bridges.

I do not like the "domain is a library, wormhole is a library call" viewpoint. I prefer to project the services from the service domain into the client, and then to access them using standard OOA accessors.

As a simple example, consider a domain that describes how peripheral registers are composed of bitfields. This service domain presents an interface that allows bitfield values to be accessed.
But in my client domain, the values represented by the bitfields are modeled as attributes on objects. So the interface concept (the bitfields) are present in both domains. As another example, the PT training course uses Train and Icon counterparts. Again, the interface concept is seen in both domains. > > I feel happier omitting the words "levels of". I tend to view a > > system as an n-dimensional space (usually n=3) with > > abstraction planes cutting accross it at all sorts of wierd > > angles. And the planes aren't always flat nor thin. > > Hmmm. We were taught that the domain chart corresponds to different > levels of > abstraction going from very high at the Application domain to very low at > the > architectural domains. However, I have always chosen to interpret that as > one of > several viewpoints. B-) > > When I say "levels of abstraction" I am simply trying to capture the idea > that > entire domains represent a consistent degree of abstraction internally but > different > domains provide different degrees of abstraction. Are you suggesting that > a domain > can have two or more of your planes pass through it? > The domain is the plane. Several domains pass through a single concept. If all the planes are parallel (as the hierarchical "levels of abstraction" term implies) then concepts would have to be vertical pillars. IMHO, Vertical + horizontal partitioning is inadequate > > If two objects in different domains are different views on a > > common subject, then the two instances of the objects > > may be conceptually the same. It is quite common to connect > > domains using inheritence. > > Quite common for whom? B-) I have no idea how you would even do that in > the OOA. > Are you talking about architectural mechanisms? > OK, perhaps "common" is a bit strong. I first saw it done in a KC training course, so I know I'm not the only person who does it. BTW, I do not usually use inheritance for intra-domain relationships like subtyping. > First, I think they are always separate instances. The IOL instance > would be > created later in time, for one thing. The normal procedure would be for > whoever > creates and Invoice to extract the shared data from the OS domain and then > create > the Invoice in the BS with its own OLs. As I indicated, it OS and BS > were combined > in a single domain, then one would not hesitate to subtype IOL and POOL > from a > common ancestor. In that case they would be separate instances. They > remains > separate instances if the domains are separated. > I don't fully understand your example. Are you saying that you would consider IOL and POOL to be different roles of OL? If so, would you migrate the subtypes from one role to the other; or would you actually create two instances of the supertype, each with a different subtype? Do the two instances have different identities? > Second, I think the is-a does exist implicitly -- that is what is causing > my angst. > It would certainly exist explicitly if the domains were combined; the > domain model > would not be properly normalized unless the common > attributes > were collected in a supertype. When the domains are separated, there is > no need > for the supertype because there is only one subtype represented in each > domain. My > argument is that the supertype is still there implicitly when one views > the two > domains at the same time. > Yes, Each view of a concept "is-a" view on that concept. Your 2 domains are both views on a common concept. The question is, where is the concept defined? 
You don't seem to be arguing that IOL is a subtype of POOL, or visa versa; so you seem to need a 3rd domain to formalise the commonality between the two objects in their different domains. This 3rd domain would give you the superclass in the implementation. > > There are several ways of connecting domains. Sometimes, > > one domain is a specialisation of another, in which case > > inheritance is useful. Other times, the wiring loom metaphor > > is appropriate. 2 subject matter in the same abstraction are > > simply plugged together with function calls (adaptor pattern). > > I assume you are talking about translation mechanisms here. My problem > is with the OOA models. > I am talking about connection between domains. I.e. what is a bridge. I consider the domain chart to be part of the model. I often find it helpful to understand bridge semantics by giving an example of an implementation. But the semantics I am trying to understand are model level phenomina, not translation mechanisms. > > I am coming to believe that AOP is a very useful way to > > think about domain interactions. AOP aspects and SM > > domains appear to be synonymous. But the bridging > > model of AOP, although immature, has much greater > > potential that the wormholes of SM. > > Alas, I know nothing about AOP. I lead a sheltered life. > You may find it interesting to look at http://electra.prakinf.tu-ilmenau.de/~czarn/aop/ (don't give up too soon - it brings things together at the end) and, more generally, http://www.ccs.neu.edu/home/lieber/demeter.html Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: Re: (SMU) Object reuse lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > It seems to me they are the application domains > of two distinct applications, which happen to have > superficial resemblance in one small area. I am not so sure about that -- provided the OS and BS have limited missions. I would be tempted to have a single application because of the very intimate communications needed between OS and BS. BS must be updated every time there is any kind of change to an order in OS -- not unlike the relation between a Radar Tracking domain and a GUI domain in an ATC system. I think I would like to formalize those kinds of communications explicitly in a bridge between domains to highlight their importance. I suppose you could have a domain in each application whose mission was to talk to the other application, but that strikes me as somewhat clumsy. Short of that, S-M doesn't offer much help in emphasizing inter-application communications -- the events just sort of magically appear. > Your resolution is not superficial, because > the objects are more different than they > are similar. I think your argument shows the > benefits of analysis before programming. :-) Yes, but I can see the other guy's viewpoint. An invoice line has all the same conceptual information as an order line (quantity, item description, price, discounts, etc.). There are usually only a couple of attributes that are unique to the invoice (e.g., backorder quantity). The values may be somewhat different (e.g., actual discounts, partial shipment quantity) but conceptually the attributes are the same. Also, I still have my angst over the implicit subtyping to a common ancestor. 
Imagine a system where one never sent an invoice until all the items, including backorders, had been shipped. In that case the identifying Invoice number could be the identifying Order number. Then each invoice line might be identified by {Order Number, Line Number} -- just like the order lines. If they are identified the same way and the attributes are conceptually the same except for a couple of minor subtype specializations, then it is hard to say that they are not closely related. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Object reuse lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to White... > Hi H.S., Can't definitively answer either question (too much > middle-aged angst of my own), but, it really does sound like > the domain should have been 'accounting'. With that aside, it > sounds like the same INSTANCES are needed in each domain > (excepting the addition of the BackOrder flag) which sounds to > me like a domain identification problem. In the original postulated context the domain missions were not very complicated. The OS was very simple -- basically just data entry. The BS was almost as simple -- it basically just created forms and routed them to a printer. One could argue that these could be subsystems in a single domain, say, Order I/O. I think, though, there are a couple of situations where this might not be desirable. If the domains are more complex than this -- as Lynch points out they could be separate applications even, as most most such systems are in larger companies -- then I think it is desirable to separate them. I think there is a limit to the size you want for domains because internally they are highly coupled (albeit less so than a non-FSM system) and that can make them more difficult to maintain. The other justification is anticipating the future. For example, if you are only doing simple minded manual order entry from a single point now but you anticipate computer order input from certain large customers in the future, you probably want to put the OS functionality into a domain rather than a subsystem so that you have the bridge firewalls to help pop fundamentally new implementation in. Similarly, you can buy individual order entry or billing systems off the shelf. If you think you might be doing that in the future, then you also want the domain firewalls. Of course, in order to keep them separate you need to rationalize that they are separate subject matters. I would argue that one can because what they do is well defined and significantly different in detail. Also, though they have intimate data links to one another, I think their relationships to other enterprise systems are quite different, especially if they have added complexity than simple I/O. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Object reuse lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I do not like the "domain is a library, wormhole is a library call" viewpoint. 
I > prefer > to project the services from the service domain into the client, and then to > access > them using standard OOA accessors. > > As a simple example, consider a domain that describes how peripheral registers > are composed of bitfields. This service domain presents an interface that allows > bitfield values to be accessed. But in my client domain, the values represented > by the bitfields are modeled as attributes on objects. So the interface concept > (the bitfields) are present in both domains. I guess I do not understand your distinction. When you perform some operation on the attribute in the client domain, doesn't that operation map to an operation in the service domain as a bridge? Let's consider an example where there are interesting operations. Suppose I have a domain with three attributes that are complex numbers. The domain doesn't care about the real and imaginary parts, so each number is abstractly modeled as a single attribute value. Now supposed one attribute is derived by adding the other two. Assuming I don't have your DFD solution available, I would read both independent attributes, pass them to a transform, and write the resulting value to the derived attribute. That transform has to understand complex number arithmetic though the domain doesn't. At translation time the attribute values would probably be handles to a service domain's Complex Number class and the transform would map to a service domain's Complex Number's Add function. How would you do that differently? > The domain is the plane. Several domains pass through a single concept. If > all > the planes are parallel (as the hierarchical "levels of abstraction" term > implies) > then concepts would have to be vertical pillars. IMHO, Vertical + horizontal > partitioning is inadequate Seriously esoteric. However, I agree that the layer model is overly simplistic. > > > If two objects in different domains are different views on a > > > common subject, then the two instances of the objects > > > may be conceptually the same. It is quite common to connect > > > domains using inheritence. > > > > Quite common for whom? B-) I have no idea how you would even do that in > > the OOA. > > Are you talking about architectural mechanisms? > > > OK, perhaps "common" is a bit strong. I first saw it done in a KC training > course, > so I know I'm not the only person who does it. I didn't take that course, so I still don't see how inheritance applies across domains. > BTW, I do not usually use > inheritance for intra-domain relationships like subtyping. A fascinating statement. You subtype without inheritance? Or do you simply eschew subtyping? And for whichever answer: Why? > > First, I think they are always separate instances. The IOL instance > > would be > > created later in time, for one thing. The normal procedure would be for > > whoever > > creates and Invoice to extract the shared data from the OS domain and then > > create > > the Invoice in the BS with its own OLs. As I indicated, it OS and BS > > were combined > > in a single domain, then one would not hesitate to subtype IOL and POOL > > from a > > common ancestor. In that case they would be separate instances. They > > remains > > separate instances if the domains are separated. > > > I don't fully understand your example. Are you saying that you would consider IOL > and POOL to be different roles of OL? 
If so, would you migrate the subtypes from > one role to the other; or would you actually create two instances of the > supertype, > each with a different subtype? Do the two instances have different identities? I was responding to your statement that you didn't like the inferred idea that I was using two objects for the same instance. Each object has its own set of instances. I consider IOL and POOL to be different subtypes of OL (if they were in the same domain). When in different domains they are clearly different instances; the issue is whether they abstractions (objects) would be essentially the same because of the implicit is-a subtype relationship. > Yes, Each view of a concept "is-a" view on that concept. Your 2 domains are both > views on a common concept. The question is, where is the concept defined? Yes, that is the crux. > You don't seem to be arguing that IOL is a subtype of POOL, or visa versa; so you > seem to need a 3rd domain to formalise the commonality between the two > objects in their different domains. This 3rd domain would give you the superclass > in the implementation. That is my problem. If there were a third domain with the supertype, then the implication is that IOL and POOL are the same abstraction except for specialization. Given that the specialization is trivial (POOL is essentially a subset of IOL except for one or two attributes), then it would seem to be illegal to have them in different domains. BTW, I assume we are speaking conceptually about the third domain with the supertype -- if it existed, then we would have essentially the same object abstraction in three domains, which would be a worse violation of the statute. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: RE: (SMU) Object reuse "Steve Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Things have been quiet for a spell, so I guess it is time to > wake everyone up. So you will please forgive the somewhat groggy response? > Some time ago I was in a debate with someone on another > forum about the way to model a particular problem. > Basically the guy had an Order System that processed POs and > a Billing System that processed Invoices. These were > separate subsystems that conform to our notion of domains. > This was modeled as > > Order System (OS): > > Purchase Order (PO)<-------->> Order Line (OL) [PO Order > Line (POOL)] > > Billing System (BS): > > Invoice (I) <-------->> Order Line (OL) [Invoice Order > Line (IOL)] > > so that the Order Line object was being reused. (Ignore the > stuff in brackets for now.) > > We have a methodological admonition that the same object > should not appear in two domains. Different views of it > are OK, but not the same abstraction. My question is, "But is it really the same abstraction?" I've never built a purchase order system nor a billing system, so I'd be making assumptions here. And rather than do that, I'll ask for clarification: Is there some *business-level* connection between a purchase order and an invoice? For example, does the customer's original request lead to the creation of a Purchase Order followed at some point in time later the creation of the Invoice to be sure we get paid for that order? 
If the purchase order and the invoice are not connected in any way *at the business level* then I say you have two distinct domains that happen to have a structural similarity (e.g., there's a thing that represents a collection (selection?) and an associative thing that relates the collection to the individual things that were collected). To me, this could be an example of an "analysis pattern". So I wouldn't look for any deeper meaning in them in the same sense that one occurrence of the analysis pattern for representing a hierarchical structure (say, in an organizational chart editing application) is not necessarily meaningfully connected to another occurrence of the same pattern (say, in an operating system's directory and file management subsystem). If there is some *business-level* connection between them, then I think that your Purchase Order and Invoice are really the "views" of some underlying domain that you haven't quite fully uncovered yet (e.g., Don White's "Accounting" domain). It might even be reasonable to look at Purchase Order and Invoice as being the (projections? of the) migrating subtypes of a unified underlying concept representing the business cycle where a customer asks for some stuff, we send it to them, then we politely demand that they pay us for the service. Purchase Order captures the state(s) when the customer has asked for it but we haven't sent it whereas Invoice represents the state(s) in which we've sent in but we haven't been paid yet. In the sense of Don White's "Accounting" domain, there's probably a reason to remember that the customer did order, we did ship, and they did pay, so a "Invoice-that-has-been-paid" might be another state. Likewise, a collections concept might also be required for when the customer asked, we sent, then we asked for money but the deadbeat customer hasn't paid us and it looks like they're not likely to pay us without further coercion. > ... > > Q1: Is that admonition a warning or a statute? Certainly > it should raise a warning flag that something *might* be > awry with the level of abstraction of the domains and due > attention should be brought to bear. My general policy is that things like this be treated as warnings. It's a red flag that says "go look there harder and make sure you're not missing some important fact". > My problem is that I > don't think OS and BS are at different levels of > abstraction. Different subject matters, Yes. But different > levels of abstraction, I don't think so. I agree that they're not different levels of abstraction. But I'm not sure that they're entirely different subject matters, either. Again, the key question is whether or not there is any *business- level* correlation between the two. If not, then they should be considered different subject matters. If there is a business-level correlation between them, then I think that (by definition) they are the same subject matter. By the way, it's been a long-long time since I've done any serious S-M stuff on a large scale, but aren't you allowed to slice up a domain into sub-domains so that the sub-domains are at the same level of abstraction but they have minimally-overlapping subject matters? > Q2: Is this a legitimate concern or simply an attack of > middle aged angst? Couldn't it be some of each? :^) -- steve Subject: RE: (SMU) Object reuse "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Also, I still have my angst over the implicit subtyping to a common ancestor. 
>Imagine a system where one never sent an invoice until all the items, including
>backorders, had been shipped. In that case the identifying
>Invoice number could be the identifying Order number. Then each invoice line might
>be identified by {Order Number, Line Number} -- just like the order lines. If they
>are identified the same way and the attributes are conceptually the same except for
>a couple of minor subtype specializations, then it is hard to say that they are not
>closely related.

From the original description I (mis)understood one application to be for purchasing and one for sales, in which I *Order* corned beef, bread and potatoes from a supplier, and *invoice* a customer for hash. This would be different objects in different domains in (probably) different applications.

Now that I understand that the invoice is for a *related* order, I agree with whoever said both objects belong in one domain, e.g. Order Processing. A domain boundary between the invoice and the order smacks of some sort of artificial implementation boundary.

In this case, invoicing is an activity of a late-life phase of an order. One might even take the extreme position that there is no need for an InvoiceLine object; such a concept might be subsumed by a QuantityShipped attribute on the OrderLine.

-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca
LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

Subject: RE: (SMU) Object reuse

"Whipp David (ITC)" writes to shlaer-mellor-users:
--------------------------------------------------------------------

[I hope this message comes out readable. I've just switched to Outlook '98, and I'm editing it in Word. I apologize if the formatting is messed up]

lahman wrote:

I guess I do not understand your distinction. When you perform some operation on the attribute in the client domain, doesn't that operation map to an operation in the service domain as a bridge?

It depends how I synchronize the attributes. However, I try to minimize explicit wormholes. Transforms and accessors invoke the services transparently. This seems to be an important justification for counterpart objects.

I didn't take that course, so I still don't see how inheritance applies across domains.

Let's take a simple example. Suppose we have a service domain that describes the client-server scenario. It has "client" and "server" objects. Another domain has "customer" and "shop-keeper" objects. Yet another has "computer" and "printer" objects.

"customer" and "computer" are clients; "shop-keeper" and "printer" are servers. The bridges between the domains define these is-a relationships. In the eventual implementation, the service domain provides the superclasses "client" and "server". The other objects inherit from these.

A more interesting question is what services are provided by the service domain, and how are they accessed. It is possible that the service domain does little more than identify architecturally significant abstractions to help the code generator (e.g. messages between clients and servers go over a network).

Perhaps the architecture needs to establish a transport connection between the objects, and it needs to know when to connect and disconnect. It could get this information by watching the dynamics of the application-level connection between the computer and printer. When the relationship is linked (assoc object created), then the connection is opened; when the relationship is deleted, the connection is closed.
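A minimal sketch of the mechanism just described (not from Dave's post), assuming a plain Python rendering: the service domain supplies the Client/Server superclasses, the application objects inherit across the bridge, and the architecture opens the transport connection when the assignment (the associative object) is created and closes it when the assignment is deleted. All class and attribute names here are my own assumptions.

# Sketch only: counterpart inheritance across the bridge, plus connect/
# disconnect behaviour tied to creating and deleting the associative object.
class Client:                     # provided by the client-server service domain
    pass

class Server:                     # provided by the client-server service domain
    pass

class Computer(Client):           # application object; is-a Client via the bridge
    def __init__(self, name):
        self.name = name

class Printer(Server):            # application object; is-a Server via the bridge
    def __init__(self, name, location, status="available"):
        self.name, self.location, self.status = name, location, status

class Assignment:                 # associative object: Computer assigned to Printer
    def __init__(self, client, server):
        self.client, self.server = client, server
        print(f"open transport connection {client.name} -> {server.name}")

    def delete(self):
        print(f"close transport connection {self.client.name} -> {self.server.name}")

# Linking the relationship opens the connection; deleting it closes it.
job = Assignment(Computer("pc42"), Printer("laser1", "room101"))
job.delete()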
There will also be some interaction with the relationship's assigner state model. Perhaps the assigner wants to search for any available network printer in a given location. The superclass may provide services that help realize the application accessor "find one printer with location=room101 and status=available". In an extreme case, the assigner may use explicit wormholes. > BTW, I do not usually use > inheritance for intra-domain relationships like subtyping. A fascinating statement. You subtype without inheritance? Or do you simply eschew subtyping? And for whichever answer: Why? Subtyping and subclassing are subtly different. The state pattern is more appropriate than inheritance for realizing a subtyping relationship. Migration is difficult if you map an is-a relationship directly to inheritance. Of course, the state pattern does use inheritance, so perhaps my original statement was slightly inaccurate. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: Subsystems [Was Re: (SMU) Object reuse] Ladislav Bashtarz writes to shlaer-mellor-users: -------------------------------------------------------------------- Steve Tockey wrote: > By the way, it's been a long-long time since I've done any serious > S-M stuff on a large scale, but aren't you allowed to slice up a > domain into sub-domains so that the sub-domains are at the same > level of abstraction but they have minimally-overlapping > subject matters? I've always called them subsystems, which I believe is the correct legal S-M term. These are indeed sliced up portions of a single domain OIM. The objects on the 'edges' of each subsystem have relationships with objects on the 'edges' of other subsystems that form the sliced up domain. I've always sliced domains so that the cut would break the least number of relationships, while retaining some cohesion among connected objects. Naming the subsystems can be a challenge. Setting up the initial domain partitioning on large complex projects is non-trivial and one needs all the help one can get. A CASE tool is not much help on such projects imho, unless it can be used to record a domain chart and supports domain partitioning into these subsystems. This is extremely critical for proper communications among project teams not to mention management presentations and project promotion. Is there anyone else out there who is using S-M for large and complex projects that do not involve embedded real-time systems? I suspect that there are not many of us. Please respond to this forum or email me directly. -- Ladislav Bashtarz Engineering Matters (tm) ladislav@engmat.com Subject: RE: (SMU) Object reuse "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp --------------------- >"customer" and "computers" are clients; "shop-keeper" and "printer" are >servers. The bridges between the domains define these is-a relationships. In >the eventual implementation, the service domain provides the superclasses >"client" and "server". The other objects inherit from these. Where are the is-a relationships? You're not saying, I hope, that a customer "is-a" shopkeeper? That a computer "is-a" printer? 
-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca
LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

Subject: RE: (SMU) Object reuse

"Whipp David (ITC)" writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Lynch, Chris D. SDX" replied to me:

"customer" and "computer" are clients; "shop-keeper" and "printer" are servers. The bridges between the domains define these is-a relationships. In the eventual implementation, the service domain provides the superclasses "client" and "server". The other objects inherit from these.

Where are the is-a relationships? You're not saying, I hope, that a customer "is-a" shopkeeper? That a computer "is-a" printer?

No, I'm saying that the printer is-a server and the computer is-a client; or the shop-keeper is-a server and the customer is-a client. The "is-a" goes across the bridge.

The shopping and printing domains are complete in themselves. Assigner state models handle the assigning of a computer to printer, or shop-keeper to customer. However, by using the service domain to formalize the pattern of their interactions, the architecture is de-coupled from the application. The translator only needs to know how to implement client-server relationships (with client and server base classes). It does not need to know how to implement the shopKeeper-customer relationship. You could say that the service domain is formalizing the coloration of the model.

I will leave you to decide whether my client-server domain is a service domain or an architectural domain. I think the distinction is meaningless. In some systems it may not be architecturally significant. It is simply a true fact about the concept represented by the shopKeeper-customer interaction. In this situation it may still be useful to use the service because the domain comes with a set of generic tests which can be used with the application.

Dave.
--
Dave Whipp, Senior Verification Engineer
Infineon Technologies Corp. San Jose, CA 95112
tel. (408) 501 6695. mailto:david.whipp@infineon.com
Opinions are my own, factual statements may be in error

Subject: Re: (SMU) Object reuse

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

> My question is, "But is it really the same abstraction?" I've
> never built a purchase order system nor a billing system, so
> I'd be making assumptions here. And rather than do that, I'll
> ask for clarification:
>
> Is there some *business-level* connection between a purchase
> order and an invoice? For example, does the customer's original
> request lead to the creation of a Purchase Order followed at
> some point in time later the creation of the Invoice to be sure
> we get paid for that order?

I would say they are very closely connected in the business sense. Probably 90% of the information on an Invoice is identical to and, more importantly, derived directly from a PO. If those objects were in the same domain you would very likely have a 1:Mc (many because of partial shipments; conditional because you may not fill the order) between POs and Invoices. And there is a very fundamental business process that connects them: Customer Order => [Build Product] => Ship Product => Bill Customer.
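For concreteness, a minimal sketch (not from the thread) of the combined-domain view being debated, assuming simple Python dataclasses: Order Line as the supertype, POOL and IOL as subtypes, and the 1:Mc from Purchase Order to Invoice that partial shipments imply. Any attribute or class name not used in the posts is an assumption.

from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLine:                     # supertype: attributes common to both subtypes
    order_number: str                # *identifier
    line_number: int                 # *identifier
    quantity: int
    unit_price: float

@dataclass
class POOrderLine(OrderLine):        # POOL subtype (Order System view)
    pass

@dataclass
class InvoiceOrderLine(OrderLine):   # IOL subtype (Billing System view)
    quantity_shipped: int = 0
    backordered: bool = False        # the attribute lahman notes differs

@dataclass
class Invoice:
    invoice_number: str
    order_number: str                # referential back to the Purchase Order
    lines: List[InvoiceOrderLine] = field(default_factory=list)

@dataclass
class PurchaseOrder:
    order_number: str
    lines: List[POOrderLine] = field(default_factory=list)
    invoices: List[Invoice] = field(default_factory=list)  # 1:Mc -- several partial shipments, possibly none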
> If there is some *business-level* connection between them, then > I think that your Purchase Order and Invoice are really the > "views" of some underlying domain that you haven't quite fully > uncovered yet (e.g., Don White's "Accounting" domain). It might > even be reasonable to look at Purchase Order and Invoice as being > the (projections? of the) migrating subtypes of a unified underlying > concept representing the business cycle where a customer asks for > some stuff, we send it to them, then we politely demand that they > pay us for the service. An interesting idea. But I have a hard time getting a handle on what that underlying, undiscovered entity might be. One way to view the problem is to note that the PO and the Invoice, though closely related by business process, are quite different critters at the conceptual level. I suspect most Business Systems Analysts would look at you strangely if you tried to suggest that they were aspects of the same thing in the real world. Yet this may be the real source of my angst -- because the abstractions do not capture the conceptual difference I am essentially using the same abstractions at the order line level to describe two different things. This is subtlely different that having the same abstraction for the same object show up in two domains, but I suspect it should be viewed as just as illegal because of the ambiguity. Which leads to the question: how does one capture conceptual differences in a notation focused upon modeling the world in data? I have some doubts about the migration metaphor, BTW. The PO does not go away when the Invoice is created. > I agree that they're not different levels of abstraction. But I'm > not sure that they're entirely different subject matters, either. > Again, the key question is whether or not there is any *business- > level* correlation between the two. If not, then they should be > considered different subject matters. If there is a business-level > correlation between them, then I think that (by definition) they > are the same subject matter. I am not sure I like the implications of the last sentence. Taken to its logical conclusion the OOA of most business applications would likely end up as a single domain. Typical business systems (applications) are broken down by things like General Ledger, Accounts Payable, Payroll, etc. Within any one of these there is enough complexity to warrant separate domains. For example, a General Ledger probably wants service domains for at least things like transaction processing (i.e., database interface), balance sheet, and P&L. Yet I would argue that everything within one of those systems is carnally related in a business sense -- unless we have a different definition of 'business level correlation'. B-) > By the way, it's been a long-long time since I've done any serious > S-M stuff on a large scale, but aren't you allowed to slice up a > domain into sub-domains so that the sub-domains are at the same > level of abstraction but they have minimally-overlapping > subject matters? Yes, there is the idea of subsystems. However, I think there are some strong reasons for limiting the complexity of a domain and the concept of 'subject matter' is sufficiently loose so that there is a lot of room to play. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. 
L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Object reuse lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > From the original description I (mis)understood one > application to be for purchasing and one for > sales, in which I *Order* corned beef, bread and > potatoes from a supplier, and *invoice* a customer > for hash. This would be different objects in > different domains in (probably) different applications. Close, but not quite. The PO system handles an *incoming* order for a product. When the product has been shipped, the customer submitting the order is billed via an Invoice. > Now that I understand that the invoice is > for a *related* order, I agree with whoever > said both objects belong in one domain, > e.g. Order Processing. A domain boundary > between the invoice and the order > smacks of some sort of artificial > implementation boundary. For anything other than a Ma & Pa operation the Order System and Billing Systems would be standalone applications that talk to one another through a database. In these days of open system databases, they are even sold individually OTS on a Plug&Play basis. But if they were in a single application, I believe they would almost certainly be set up as separate service domains (rather than implementation domains). The two primary reasons for doing so are: supporting domain reuse (i.e., plugging in a commercial package later) and making the system more maintainable (at the OOA level). I believe the concept of 'subject matter' is sufficiently ill defined to support such distinctions. In the end the subject matter of the OS domain is orders while the subject matter of the BS domain is invoices -- which are quite different things in the business systems world. Furthermore, I think the end user would think of OS and BS as being distinctly different processing. The end user would certainly think of them as separate subsystems. The boundaries of such subsystems might have been defined historically based upon data processing partitioning, but those boundaries are today perceived as The Way Things Are. Finally, I would point out that there is always some other client domain or application that uses both as services. That third domain coordinates those services with others, such as Production, Shipping, and Accounts Receivable. > In this case, invoicing is an activity of > a late-life phase of an order. One might > even take the extreme position > that there is no need for an > InvoiceLine object; such a concept might > be subsumed by a QuantityShipped > attribute on the OrderLine. The problem with a single OrderLine is that QuantityShipped is undefined until the product is shipped, which is a no-no for a normalized OOA. There are also some practical reasons in the problem space for them to be different. For example, one can have a partial shipment of a multiple unit order item. This causes two or more Invoice Order Lines to be generated as the partial shipments occur whose quantities sum to the order line quantity. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Object reuse lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp...
> [I hope this message comes out readable. I've just switched to Outlook '98, > and I'm editing it in Word. I apologize if the formatting is messed up] I get a lot of line wrap on quoted passages. I suspect that Outlook has a different definition of Tab than Netscape because the indenting on the quoted text is about a dozen characters, guaranteeing line wrap if your outgoing message is set to 40-60 characters. > I guess I do not understand your distinction. When you > perform some operation on > the attribute in the client domain, doesn't that operation > map to an operation in > the service domain as a bridge? > > It depends how I synchronize the attributes. However, I try to minimize > explicit wormholes. Transforms and accessors invoke the services > transparently. This seems to be an important justification for counterpart > objects. I'm still confused. So the transform invoked is associated with a counterpart object. It seems to me that the transform still has to map into a bridge if the real object with the functionality/state machine is in another domain. Or do you move the object's state machine/functionality into the client domain's counterpart? If so, that seems like a major abuse of the admonition against the same objects in two domains. > I didn't take that course, so I still don't see how > inheritance applies across > domains. > > Let's take a simple example. Suppose we have a service domain that describes > the client-server scenario. It has "client" and "server" objects. Another > domain has "customer" and "shop-keeper" objects. Yet another has "computer" > and "printer" objects. > > "customer" and "computers" are clients; "shop-keeper" and "printer" are > servers. The bridges between the domains define these is-a relationships. In > the eventual implementation, the service domain provides the superclasses > "client" and "server". The other objects inherit from these. The domain with 'client' and 'server' sounds like an architectural domain rather than a service domain. > A more interesting question is what services are provided by the service > domain, and how are they accessed. It is possible that the service domain > does little more than identify architecturally significant abstractions to > help the code generator (e.g. messages between clients and servers go over a > network). > > Perhaps the architecture needs to establish a transport connection between > the objects, and it needs to know when to connect and disconnect. It could > get this information by watching the dynamics of the application-level > connection between the computer and printer. When the relationship is linked > (assoc object created), then the connection is opened; when the relationship > is deleted, the connection is closed. > > There will also be some interaction with the relationship's assigner state > model. Perhaps the assigner wants to search for any available network > printer in a given location. The superclass may provide services that help > realize the application accessor "find one printer with location=room101 and > status=available". In an extreme case, the assigner may use explicit > wormholes. All this sounds a lot like you are combining the application OOA with the architecture OOA with a dab of colorization thrown in. > Subtyping and subclassing are subtly different. The state pattern is more > appropriate than inheritance for realizing a subtyping relationship. > Migration is difficult if you map an is-a relationship directly to > inheritance.
Of course, the state pattern does use inheritance, so perhaps > my original statement was slightly inaccurate. I am not sure I buy that distinction at the OOA level. The State pattern is really only inserting a polymorphic wrapper around a method interface (i.e., TCPConnection becomes the wrapper while TCPState handles the original responsibilities of TCPConnection). It is not clear to me how this is relevant in an OOA where one replaces method interfaces with events. This decoupling is effectively done with the table that maps a supertype event into the event of the current subtype. Even in the State pattern the core subtyping and subtype migration still exists in exactly the same form it appears in an OOA. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Subsystems [Was Re: Object reuse] lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Bashtarz... > I've always called them subsystems, which I believe is the correct legal > S-M term. These are indeed sliced up portions of a single domain OIM. > The objects on the 'edges' of each subsystem have relationships with > objects on the 'edges' of other subsystems that form the sliced up > domain. I've always sliced domains so that the cut would break the least > number of relationships, while retaining some cohesion among connected > objects. Naming the subsystems can be a challenge. One could argue that if you can name them easily, then they might qualify as a separate subject matter. > Setting up the initial domain partitioning on large complex projects is > non-trivial and one needs all the help one can get. A CASE tool is not > much help on such projects imho, unless it can be used to record a domain > chart and supports domain partitioning into these subsystems. This is > extremely critical for proper communications among project teams not to > mention management presentations and project promotion. The CASE tools are getting better at real support for subsystems, but they aren't there yet. So the practicalities of project management cause us to looking for ways to split out separate domains whenever the object count starts to get above 20-30. > Is there anyone else out there who is using S-M for large and complex > projects that do not involve embedded real-time systems? I suspect that > there are not many of us. Please respond to this forum or email me > directly. Superficially, probably not too many. B-) However, the reality is that a lot of large systems are only partially R-T/E. We have a device driver that goes to 200 KLOC spread over three domains -- but that isn't even embedded in the classical sense and real time issues are only relevant for a small portion of it. Meanwhile, all the other software in the system is not R-T/E at all; it just moves bits from one pile to another like everybody else's software. For the non-driver portion of the system we don't even have to worry about asynchronous processing except peripherally. BTW, I do not consider asynchronous processing to be R-T per se. It is so commonplace nowadays for larger systems because of distributed processing, etc. that it has taken on the status of a fact of life. So I only consider something to be R-T if there are time constraints at the hardware level on when communications take place or how they are sequenced. -- H. 
S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) Objects aren't classes "Whipp David (ITC)" writes to shlaer-mellor-users: -------------------------------------------------------------------- I was at a talk by James Rumbaugh last night, and he was trying to explain the concept of UML Roles; and how they differ from classes and objects. A UML role is modeled as a prototypical instance; and is found in a collaboration diagram. For some reason, people were having difficulty with the concept, and he was bombarded with questions like "is it a class?" - no, "so is it an object?" - no, "so is it a state?" ... The thing that stuck me is that a UML prototypical instance is the same thing as a Shlaer-Mellor object. Most people, myself included, have tended to map SM objects to UML classes - and we have subsequently had problems working out what the UML equivalent of a "Domain" is. If we use the correct mapping, then SM seems to fit better into UML: An object -> Prototypical Instance (role) A domain -> Collaboration Subsystem -> Package Domain-Chart -> um, well, it seems to be the use case diagram. Thoughts anyone? Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: Re: (SMU) Objects aren't classes Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- I was interested to hear a mention of the "Other Side" on ESMUG. I have not previously seen much acknowledgement of the UML camp in this forum. I have done some research into both areas (SMOOA and UML) and I have seen various papers about how the UML notation can be used to express SM models. I am slightly confused about your mapping of the Domain chart onto the Use-Case diagram. As far as I am aware, use-case diagrams are not really like anything that the SMOOA notations offer. They are primarily intended to offer a way of capturing behavioural requirements in the form of "the things that the end user wants to do with the system". I certainly wouldn't expect Use-cases to capture information about software architectures and implementation technologies in the same way that a domain chart would. I am currently driving a software project in my company based on a combination of use-cases and the full (?) SMOOA method. We are hoping to supplement the SMOOA process by initially defining use-cases for the system, to help lead us into the domain analysis. I believe this technique has been successfully used by one or two organisations already (I have seen a few postings from one or two people). 
The process we are proposing to use is 1 Textual requirements capture (requirements specification document) 2 Use-case capture (graphical use-case diagrams) 3 Definition of release content based on use cases (See Note Below) 4 Definition of preliminary domain chart (the use-cases should provide some clues for candidate domains) 5 Definition of sequence diagrams based on use-cases and preliminary domains 6 Detailed definition of bridges 7 Preliminary Domain object models (for each domain of interest) 8 Domain object model and subsequent domain analysis for each release (based on the release definitions) then simulation, testing, code generation etc NOTE: We plan to implement the software in a number (currently 3) of phased internal releases of software, each offering different levels of functionality. The use-cases will hopefully reveal enough information about the required functionality of the system to help us define the releases. So in summary, I think the use-cases will provide a perspective which will supplement the SMOOA, but I don't think they are the same as the domain chart. I could envisage how you could make a collaboration diagram which looked like a domain chart (but I don't think UML forces you to do this). I would be interested to hear any comments on the above. Regards, Daniel Dearing :-) Plextek Limited Communications Technology Consultants Subject: Re: (SMU) Objects aren't classes lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > The thing that stuck me is that a UML prototypical instance is the same > thing as a Shlaer-Mellor object. Most people, myself included, have tended > to map SM objects to UML classes - and we have subsequently had problems > working out what the UML equivalent of a "Domain" is. If we use the correct > mapping, then SM seems to fit better into UML: > > An object -> Prototypical Instance (role) > A domain -> Collaboration > Subsystem -> Package > Domain-Chart -> um, well, it seems to be the use case diagram. I don't know enough about the nuances of UML to comment on the mapping of objects to roles.
But my understanding of packages is that they define an interface that encapsulates a suite of related objects with a particular responsibility -- which sounds more like a domain. We might identify subsystems within a domain as a practical need for partitioning development effort, but the partition is conceptual (i.e., minimum cut on the OCM) rather than formalized in the notation as an interface. Also, the collaboration strikes me as more of a souped up OCM. While we use use cases informally to allocate functionality both among and within domains, the details of the diagrams seem unsuited to distinguishing between client/service requirements flows and communications flows that needs to be done to define bridges. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Objects aren't classes lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Dearing... > I am slightly confused about your mapping of the Domain chart onto the > Use-Case diagram. As far as I am aware, use-case diagrams are not really > like anything that the SMOOA notations offer. They are primarily > intended to offer a way of capturing behavioural requirements in the > form of "the things that the end user wants to do with the system". I > certainly wouldn't expect Use-cases to capture information about > software architectures and implementation technologies in the same way > that a domain chart would. I believe use cases can be useful for allocating responsibilities among domains and for defining the general character of the bridges between domains. At a large scale they can clearly be useful for defining the functionality of the domains. As such I don't see any inconsistency about using them to define the functionality of the implementation and architectural domains -- at least at the overview level of the DC. The problem with using use cases to define domains is that the domain concept is more sophisticated. The idea of 'subject matter' also implies the relative abstraction of the domain and a more general concept of what the domain is about than simply its functionality. To this extent they can't easily capture ideas related, say, to translation that are important to architectural domains. This is why we try to define our domains up front and then apply use cases to allocate the responsibilities. One view of the bridges between domains is that they describe the flow of requirements between client and service domains. Use cases are pretty good at this sort of thing because most requirements involve the allocation of functionality. However, they aren't so good at describing the other important aspect of bridges: communications, which are about data flows and messages. > I am currently driving a software project in my company based on a > combination of use-cases and the full (?) SMOOA method. We are hoping to > supplement the SMOOA process by initially defining use-cases for the > system, to help lead us into the domain analysis. I believe this > technique has been successfully used by one or two organisations already > (I have seen a few postings from one or two people).
> > The process we are proposing to use is > 1 Textual requirements capture (requirements specification document) > 2 Use-case capture (graphical use-case diagrams) > 3 Definition of release content based on use cases (See Note Below) > 4 Definition of preliminary domain chart (the use-cases should provide > some clues for candidate doamins) This step I worry about for the reasons above. > 5 Definition of sequence diagrams based on use-cases and preliminary > domains > 6 Detailed definition of bridges > 7 Preliminary Domain object models (for each domain of interest) > 8 Domain object model and subsequent domain analysis for each release > (based on the release definitions) > > then simulation, testing, code generation etc > > NOTE: We plan to implement the software in a number (currently 3) of > phased internal releases of software, each offering different levels of > functionality. The use-cases will hopefully reveal enough information > about the required functionality of the system to help us define the > releases. Yes. This is the best way to use use cases in S-M, IMO. In our shop the dates and the resources are fixed so the thing we have to play with is the feature set to meet schedules. We do the DC and the IM for the whole project but we develop the details feature-by-feature. We use use cases to determine exactly what we have to model and implement for attributes, states, and actions for each feature. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: RE: (SMU) Objects aren't classes "Whipp David (ITC)" writes to shlaer-mellor-users: -------------------------------------------------------------------- Daniel Dearing > wrote: I am slightly confused about your mapping of the Domain chart onto the Use-Case diagram [...] So in summary, I think the use-cases will provide a perspective which will supplement the SMOOA, but I don't think they are the same as the domain chart. I could envisage how you could make a collaboration diagram which looked like a domain chart (but I don't think UML forces you to do this). My use of the Use-Case diagram for showing the context of collaborations was driven from the UML. In the UML, collaborations are shown as an oval on a Use-Case diagram with "realizes" relationships to Use Cases. I don't feel it is a big stretch to add the possibility of relationships between collaborations. Use-cases are not domain charts, but the relationships between Use-Cases do seem to be at a similar abstraction to the relationships between domains. Jacobson (et al) talk about an "Use-Case driven, architecture-centric, development process". I may disagree with their definition of architecture, but I can see benefits in tying the domain chart to the use cases. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: RE: (SMU) Objects aren't classes "Whipp David (ITC)" writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: I don't know enough about the nuances of UML to comment on the mapping of objects to roles. That's not really surprising. The books don't describe them very well. Their definition does seem to match the object definition from object-lifecycles (P12). 
SM defines a process by which objects may be translated to classes (translation). I feel it may be easier to explain this process in a UML context if we avoid using classes as both a start-point and end-point for the translation. But my understanding of packages is that they define an interface that encapsulates a suite of related objects with a particular responsibility - which sounds more like a domain. We might identify subsystems within a domain as a practical need for partitioning development effort, but the partition is conceptual (i.e., minimum cut on the OCM) rather than formalized in the notation as an interface. A package has very little semantics beyond encapsulation. To quote the UML User Guide: "You use packages to arrange your modeling elements into larger chunks that you can manipulate as a group". The purpose of a subsystem is to break down a large group of objects into manageably sized chunks. The two definitions come from different perspectives, but the intent is the same: to manage large diagrams. Also, the collaboration strikes me as more of a souped up OCM. I can see this argument. I agree that a collaboration *diagram* is simply a way of showing interaction patterns. However, the concept of a collaboration is more than this. To quote the user guide again (P370): "In the UML, you model mechanisms using collaborations. A collaboration gives its name to the conceptual building blocks of your system" On page 371, this is refined to "A collaboration is a society of classes, interfaces, and other elements that work together to provide some cooperative behavior that's bigger than the sum of all its parts." Finally, on page 372, we are told "Collaborations have two aspects: a structural part ... and a behavioral part..." It seems clear that a collaboration is more than the collaboration diagram. When I examine the definition and intent of the collaboration (and even the symbol - an ellipse), I see it as equivalent to domains. While we use Use Cases informally to allocate functionality both among and within domains, the details of the diagrams seem unsuited to distinguishing between client/service requirements flows and communications flows that needs to be done to define bridges. I can refer you to page 375 of the user guide: "Organizing Collaborations". Their example seems pertinent to your question from last week: they show, on a use-case diagram, the "Place order" use case being realized in the "Order Management" collaboration; and that being refined by the "Order Validation" collaboration. There are obviously some differences between collaborations and domains, but I think the UML is flexible enough to absorb them without breaking the spirit of SM. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: RE: (SMU) Objects aren't classes peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 10:44 AM 4/26/99 -0700, shlaer-mellor-users@projtech.com wrote: >"Whipp David (ITC)" writes to shlaer-mellor-users: >-------------------------------------------------------------------- >> HS said: Also, the collaboration strikes me as more of a souped up >OCM. > >I can see this argument. I agree that a collaboration *diagram* is simply a >way of showing interaction patterns. However, the concept of a collaboration >is more than this.
To quote the user guide again (P370): "In the UML, you >model mechanisms using collaborations. A collaboration gives its name to the >conceptual building blocks of your system" I've been following this thread somewhat. I believe that everyone has to keep a couple of basic facts about UML in mind: - UML is a superset of a variety of older notations, with a resulting combination of Analysis, Design, and Implementation concepts in a single collection of graphics and semantics. To effectively apply UML, one must first select the subset that addresses their needs. From an Analysis perspective, interpreting the semantics of the collaboration diagram as a "souped up OCM" is not a bad starting point. The semantics of modeling "mechanisms" using collaborations are fully in the Design realm - not in Analysis. - The only thing "Unified" about UML is the graphical notation. While the evolution of model based software development has led many practitioners to consensus on a large body of semantics, there is wide divergence on many other (typically newer, or "fringe") concepts. You can reasonably expect some differences between virtually every author and "expert" on many seemingly basic concepts. Much of this can be traced to a common history of elaboration, and the intermixing of Analysis and Design. This is changing - but we're not "there" yet. Given the above points, those comfortable with Shlaer-Mellor OOA/RD and its separations - Analysis/Design, and by subject matter into domains - should look to UML first from the Analysis-only perspective. We've captured our view on what the "core" UML Analysis elements are in a paper available from the "Downloads" page of our web site (www.pathfindersol.com). It's called "Model-Based Software Engineering - An Overview of Rigorous and Effective Software Development using UML". To answer the assertion in the subject line: we believe that in Analysis Shlaer-Mellor objects map quite nicely to UML classes. Anyway, our advice for those trying out UML is the same as for those enamored with the Swiss Army knife: don't try to use everything at the same time. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: RE: (SMU) Objects aren't classes "Whipp David (ITC)" writes to shlaer-mellor-users: -------------------------------------------------------------------- Peter Fontana wrote: > From an Analysis perspective, interpreting the semantics of the > collaboration diagram as a "souped up OCM" is not a bad starting > point. Yes, collaboration diagrams are not the same thing as collaborations. Read chapter 27 of the UML User Guide: It shows how to model both static and dynamic aspects of a collaboration; but does not include any collaboration diagrams (It could, but the authors chose to use a sequence diagram instead). > The semantics of modeling "mechanisms" using collaborations > are fully in the Design realm - not in Analysis. I have to disagree here. I believe that the architectural domains are in the realm of analysis, not design. To view "mechanisms" as purely design artifacts is to view the UML through elaboration-tinted glasses. You should also note that mechanisms are only one use of collaborations.
> To answer the assertion in the subject line: we believe that > in Analysis Shlaer-Mellor objects map quite nicely to UML classes. I, too, believe the SM objects map _quite_ nicely to UML classes. But looking at the definitions, Roles are an even nicer fit. From a philosophical viewpoint, if SM domains provide multiple viewpoints on a concept; then the viewpoints may be seen as roles and the concepts can be classes. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp. San Jose, CA 95112 tel. (408) 501 6695. mailto:david.whipp@infineon.com Opinions are my own, factual statements may be in error Subject: Re: (SMU) Objects aren't classes lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > That's not really surprising. The books don't describe them very well. Their > definition does seem to match the object definition from object-lifecycles > (P12). SM defines a process by which objects may be translated to classes > (translation). I feel it may be easier to explain this process in a UML > context if we avoid using classes as both a start-point and end-point for the > translation. My worry is about the 'role' aspect. That is only one of several classifications for an S-M object. > A package has very little semantics beyond encapsulation. To quote the UML User > Guide: "You use packages to arrange your modeling elements into larger > chunks that you can manipulate as a group". The purpose of a subsystem is to > break down a large group of objects into manageably sized chunks. The two > definitions come from different perspectives, but the intent is the same: to > manage large diagrams. Yes, but groups are manipulated through interfaces. I think the issues are really scale, since packages can be small but domains tend to be complex, and the formalism of the interface. > Also, the collaboration strikes me as more of a souped up > OCM. > > I can see this argument. I agree that a collaboration *diagram* is simply a > way of showing interaction patterns. However, the concept of a collaboration > is more than this. To quote the user guide again (P370): "In the UML, you > model mechanisms using collaborations. A collaboration gives its name to the > conceptual building blocks of your system" > > On page 371, this is refined to "A collaboration is a society of classes, > interfaces, and other elements that work together to provide some > cooperative behavior that's bigger than the sum of all its parts." > > Finally, on page 372, we are told "Collaborations have two aspects: a > structural part ... and a behavioral part..." It seems clear that a > collaboration is more than the collaboration diagram. When I examine the > definition and intent of the collaboration (and even the symbol - an > ellipse), I see it as equivalent to domains. I see your point, but I am still bothered by the 'mechanism' bit in the first quote. One could just as well read this as applying to architectural or implementation mechanisms that describe the details of how objects interact. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com 'archive.9905' -- Subject: (SMU) Need a SMOOA person "Levkoff, Bruce" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello, Sorry to invade the space with job-related stuff but...
We have a need for a full-time software engineer versed in Shlaer-Mellor Object-Oriented Analysis for a medical device involving robotics and imaging in a multi-processor, multi-platform environment. This is a 6 month + assignement in Columbus, OH. The position will involve modeling and translation. If you are interested, please contact Bill Knox at Battelle: knoxw@battelle.org Regards, Bruce Subject: (SMU) Spanning the Globe: Shlaer-Mellor User Groups "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello everyone, This message announces two, yes two, Shlaer-Mellor User Group meetings. One will be held in the UK and sponsored by Kennedy-Carter on September 14 and 15 and the other on the West Coast of the US in San Jose, sponsored by Project Technology on September 23 and 24. Please mark your calendars! It will be very worth your while to attend the meetings. Among the many activities, we shall announce a mapping from Shlaer-Mellor concepts into UML so that you can have the advantages of precise modeling and translation while using UML notation. We shall also report on progress with the action semantics work at the OMG. All this in addition to the usual workshops and presentations showcasing the latest and greatest. And most of all, you'll have a chance to meet other developers who take software engineering seriously and learn from them. Oh yes. The meetings will be fun too! The meetings are complementary, but for convenience, they will be run as separate entities. For information on the UK meeting, please mail smug@kc.com or call Tracy Morgan on +44 1483 483200. For information on the West Coast Meeting, mail steve@projtech.com with US SMUG as the subject line, or call +1-510-525-2756. Much more information will be available shortly. -- Steve Mellor and Allan Kennedy Subject: (SMU) Announcement: West Coast SMUG "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- ANNOUNCEMENT West Coast Shlaer-Mellor User Group The West Coast Shlaer-Mellor User Group will be held in San Jose, California, September 23-24, just before the Embedded Systems Conference, which starts on the 26th September (Sunday). Please mark your calendars! To help with our planning, please also send an e-mail to me with ATTENDANCE as the subject, stating whether you think you'll be able to attend _or not_. Please provide the number of people and a percentage likelihood, with 100% meaning you're certain you'll be there. The User Group will comprise two parts, presentations to the group as a whole, and working sessions for special topics that require discussion and _solutions_. If you have a topic on which you would like to present, backed up with a page or two of writing, or you would be willing to lead a working session on a special topic, please contact me, Steve Mellor, steve@projtech.com, as soon as possible. We'll be sending out much more information over the next few months, but first you need to mark those calendars! -- steve mellor Subject: Re: (SMU) Spanning the Globe: Shlaer-Mellor User Groups smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- > -- Steve Mellor and Allan Kennedy That's nice to see. 
:-) -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 Subject: (SMU) UK SMUG 99 Conference Tracy Morgan writes to shlaer-mellor-users: -------------------------------------------------------------------- ANNOUNCEMENT The 6th Annual UK SMUG Conference 14 - 15 September, New Forest, UK To follow up Steve Mellor's postings about this year's User Group Conferences, here is some information about the UK conference - SMUG 99. One of the primary aims of SMUG 99 is to provide a path through the chaos that will ensue from the ad hoc adoption of UML, by showing how OOA/RD provides the rigorous framework within which to exploit the emerging Rational Unified Process, The Unified Modelling Language and the UML Action Semantics. The SMUG 99 programme is designed to provide a mixture of current best practice, insight into future directions, and the opportunity to influence the method and tool developers. This year we have brought together the most impressive line-up of speakers ever, including the main man himself, Steve Mellor, and the dynamic duo Mike Lee and Leon Starr, the only team in the world who can extract entertainment value from domain partitioning! The presentations, edited highlights of which are outlined below, are delivered by the world's most experienced OOA/RD practitioners. SMUG 99 also offers a unique opportunity to access the insights of the authors of the three most highly regarded books on the Shlaer-Mellor Method. Steve Mellor and Ian Wilkie will bring you up to date with the action semantics which is being proposed to the OMG to be incorporated into the UML. Mike and Leon will share their extensive experience and insights into good and bad analysis practices - but promise to mention no delegate names. Colin Carter and John Wright will illustrate how Shlaer-Mellor development fits into the Rational Unified Process. Chris Raistrick and Neil Robson of BMW will demonstrate an environment for development of hard real-time embedded systems, using a tightly integrated set of specialised tools. There will also be a range of presentations showing the latest developments and experiences in automatic code generation from organisations such as GCHQ, Marconi Communications and GEC Marconi to name but a few. Feedback from the previous five SMUGs shows that delegates really appreciate the chance to relax and interact with the other delegates on the first evening. This year, we have doubled our entertainment budget, so that, as well as the traditional barbecue and quiz, delegates will have the opportunity to try some paintballing, and attend a spy school. You just never know when these skills will be needed! For more details, including a detailed conference programme and a booking form, please visit our web site at http://www.kc.com. We're confident that this year's UK SMUG Conference will be the best ever! I hope you can make it. Tracy *************************************************************************** ***** BOOK YOUR PLACE AT SMUG 99 NOW! The Sixth Annual SM-OOA User Group Conference is on 14 - 15 September, in the beautiful New Forest, Dorset, UK. Please visit our web-site for more information.
*************************************************************************** ***** Tracy Morgan tel : +44 1483 483200 Kennedy Carter Ltd fax : +44 1483 483201 14 The Pines web : http://www.kc.com Broad Street email : tracy@kc.com Guildford GU3 3BH UK "We may not be Rational but we are Intelligent" *************************************************************************** ***** 'archive.9906' -- 'archive.9907' -- Subject: (SMU) It's all gone quiet... Tristan Pye writes to shlaer-mellor-users: -------------------------------------------------------------------- Have I dropped off the list, or has it all been quiet for the last month or so...? Anyway, I thought I'd play Devil's Advocate and try to kick start a discussion, so here goes... Why does SM need to formalise relationships using referential attributes? I haven't seen any action language except for BridgePoint's, so I don't know if this is globally true, but BPAL won't let me set referential attributes directly - they are implied by linking objects together across a relationship, and relationships are navigated by specifying the relationship name, not by any values on ref attributes. A valid implementation would be to use, say, pointers to represent relationships between instances as opposed to a relational database model, so why does the analysis force us down this route? I can think of a couple (or so) of examples of when ref attributes would be needed (eg Collapsed Referentials) but why can't we just use them when they are needed? I won't mention my other examples for now... I'll see if they come up in discussion! Just curious.... Tristan. Subject: Re: (SMU) It's all gone quiet... lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Pye... > Why does SM need to formalise relationships using referential attributes? I > haven't seen any action language except for BridgePoint's, so I don't know > if this is globally true, but BPAL won't let me set referential attributes > directly - they are implied by linking objects together across a > relationship, and relationships are navigated by specifying the > relationship name, not by any values on ref attributes. A valid > implementation would be to use, say, pointers to represent relationships > between instances as opposed to a relational database model, so why does the > analysis force us down this route? I think there are several different answers for this. For one thing, action languages represent the dynamic portion of the model while the referential attributes represent the static portion. Since the static and dynamic portions must both deal with referential integrity, the presence of referential attributes in the static model is necessary. If you use the ADFDs rather than an action language, then you really have no choice since the ADFDs force a data store paradigm. And the ADFDs came before the action languages. B-) I believe the second point just demonstrates the more general idea that referential attributes provide a very general mechanism for resolving references in an information model. It is sufficiently general that it can be mapped into a variety of specific navigation techniques, one of which is navigation by relationship and instance reference as in an action language. An even more specific implementation in the architecture would be pointers.
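As a rough illustration of that last point, here is a small sketch, with invented names (Dog, Owner, owner_id, R1 -- none of them from the posts), of the same relationship read two ways: as a referential attribute resolved through an instance population, and as the direct reference an architecture might substitute for it.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Owner:
    owner_id: str
    name: str

# Reading 1: the analysis-level formalization -- a referential attribute that
# holds the identifier of the related Owner instance, resolved through the
# instance population.
@dataclass
class Dog:
    dog_id: str
    owner_id: str            # referential attribute formalizing R1

owners: Dict[str, Owner] = {"O1": Owner("O1", "Alice")}
rex = Dog("D1", owner_id="O1")
print(owners[rex.owner_id].name)     # navigate R1 by identifier

# Reading 2: one possible implementation of the same relationship -- the
# architecture replaces the identifier with a direct reference (the moral
# equivalent of a pointer). The model itself is unchanged.
@dataclass
class DogRef:
    dog_id: str
    owner: Optional[Owner] = None

rex2 = DogRef("D1", owner=owners["O1"])
print(rex2.owner.name)               # navigate R1 by following the reference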
A key point is that referential attributes -- like all attributes -- are an abstraction in themselves that do not necessarily have to be implemented as a conventional data store. They are simply a notational artifact that allows one to keep track of instances in a particular diagram. In my view the action languages represent a somewhat less generic approach to relationship navigation that is somewhat closer to typical implementations in a computing environment. > I can think of a couple (or so) of examples of when ref attributes would be > needed (eg Collapsed Referentials) but why can't we just use them when they > are needed? In a word: consistency. If a notation is to be simple it can't have a lot of exceptions. If a notation is to be easy to use one should express common things in exactly the same way each time. If a notation is to be rigorous, it should minimize the number of judgments to be made about the way concepts should be expressed (there are enough judgments in figuring out what concepts need to be expressed). [Go on OTUG and ask what the difference is between an aggregation relationship and a composition relationship. Then settle back for a couple of week's worth of exchanges as the UML gurus argue about it. One has to wonder about a notation where they keep providing lengthy quotes to each other from different parts of the reference manual and still can't agree on exactly when each type should be used. But I digress...] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) It's all gone quiet... peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 08:32 AM 7/23/99 -0400, shlaer-mellor-users@projtech.com wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Pye... > >> Why does SM need to formalise relationships using referential attributes? I >> haven't seen any action language except for BridgePoint's, so I don't know >> if this is globally true, but BPAL won't let me set referential attributes >> directly - they are implied by linking objects together across a >> relationship, and relationships are navigated by specifiying the >> relationship name, not by any values on ref attributes. A valid >> implementation would be to use, say, pointers to represent relationships >> between instances as opposed to a relational database model, so why does the >> analysis force us down this route? > >I think there are several different answers for this. For one thing, action >languages represent the dynamic portion of the model while the referential >attributes represent the static portion. Since the static and dynamic portions >must both deal with referential integrity, the presence of referential >attributes in the static model is necessary. > >If you use the ADFDs rather than an action language, then you really have no >choice since the ADFDs force a data store paradigm. And the ADFDs came before >the action languages. B-) You say that implying that ADFDs are a fully and unambigously defined form of Process Modeling. As the only company that ever offered fully executable Process Models from ADFDs, Pathfinder can say that this is not the case. 
Even with the huge step forward from OOA91 to OOA96, we still had to "clarify" (invent) certain details to make it all work. So what this means is there is ample precedent for you to define that some data flows carry references to instances (instead of flowing the id attributes), and you can define Link and Unlink processes that would take two input flows of instance references. >A key point is that referential attributes -- like all attributes -- are an >abstraction in themselves ... I argue that the Association itself is the abstraction that you should be using - not the associative attribute. In our consulting work, we find many OOA modelers neglect their relationships - rationalizing that they don't need to invest the effort in role phrases and descriptions because the associative attribute carries the info. I say this info is in the wrong place - the relationship should have it. >that do not necessarily have to be implemented as a >conventional data store. They are simply a notational artifact that allows one >to keep track of instances in a particular diagram. In my view the action >languages represent a somewhat less generic approach to relationship navigation >that is somewhat closer to typical implementations in a computing environment. > >> I can think of a couple (or so) of examples of when ref attributes would be >> needed (eg Collapsed Referentials) but why can't we just use them when they >> are needed? > >In a word: consistency. Agreed. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: Re: (SMU) It's all gone quiet... lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > >If you use the ADFDs rather than an action language, then you really have no > >choice since the ADFDs force a data store paradigm. And the ADFDs came before > >the action languages. B-) > > You say that implying that ADFDs are a fully and unambigously defined form > of Process Modeling. As the only company that ever offered fully executable > Process Models from ADFDs, Pathfinder can say that this is not the case. > Even with the huge step forward from OOA91 to OOA96, we still had to > "clarify" (invent) certain details to make it all work. > > So what this means is there is ample precedent for you to define that some > data flows carry references to instances (instead of flowing the id > attributes), and you can define Link and Unlink processes that would take > two input flows of instance references. I think you are reading too much into what I said. My point was merely that if you use ADFDs, as described in S-M, the ADFDs go to data stores for the referential IDs to put on processes. This is one way to navigate without references, links or unlinks. We did that for years before OOA96 when we were manually generating code. The translation rules might have been somewhat trickier than they would have been with a more precise definition, but it was still do-able. > >A key point is that referential attributes -- like all attributes -- are an > >abstraction in themselves ... > > I argue that the Association itself is the abstraction that you should be > using - not the associative attribute. 
In our consulting work, we find many > OOA modelers neglect their relationships - rationalizing that they don't > need to invest the effort in role phrases and descriptions because the > associative attribute carries the info. I say this info is in the wrong > place - the relationship should have it. I agree with you that the important thing is the relationship and it certainly needs to be described properly. But I think that is a separate issue for proper training. You still need something unambiguous in the notation for the static model to identify the instances involved. Language descriptions won't cut it unless the language is highly restricted, in which case it is simply a notation artifact. So it comes down to where one places that additional information. I am sure it could be attached to the relationships themselves. But the referential attributes are a pretty efficient notation to capture the idea of referential integrity and I suspect whatever you put on the relationship will look a lot like them. While associating them with the relationship might have some marginal value in highlighting the relationship's importance, I would argue that is offset by the fact that referential attributes are comfortable for many people already familiar with ERD practice and the relational data model. If a familiar notation works efficiently, as I think the IM does, then why change it? One of the things that I like about S-M is that you don't have to spend a lot of time explaining the notation so you can focus on the important part about the discipline around the notations -- which includes educating people about the importance of properly defining relationships. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: STD Notation and BridgePoint(Was: (SMU) It's all gone quiet...) "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Fri, 23 Jul 1999 08:32:29 lahman wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >In a word: consistency. If a notation is to be simple it can't have a lot of >exceptions. If a notation is to be easy to use one should express common >things in exactly the same way each time. If a notation is to be rigorous, it >should minimize the number of judgments to be made about the way concepts should >be expressed (there are enough judgments in figuring out what concepts need to >be expressed). [Go on OTUG and ask what the difference is between an >aggregation relationship and a composition relationship. Then settle back for a >couple of week's worth of exchanges as the UML gurus argue about it. One has to >wonder about a notation where they keep providing lengthy quotes to each other >from different parts of the reference manual and still can't agree on exactly >when each type should be used. But I digress...] > Sorry, what notation was that notation again? :-) Actually I've had a question that I wanted to pose to the list for a long time. It's to do with the STD notation and BridgePoint. [This may also be pertinent to other tools, but BridgePoint is the one with which I am familiar.] The Shlaer/Mellor method uses the Moore notation. This is where actions are placed within states. The alternative notation is Mealy, where actions are placed on transitions. 
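As a toy illustration of the difference in action placement -- the state names, events, and dispatch helpers below are invented for this sketch and are not BridgePoint's semantics:

# Moore style: the action hangs off the state being entered.
moore_entry_actions = {
    "Filling": lambda: print("start pump"),
    "Idle":    lambda: print("stop pump"),
}
moore_transitions = {("Idle", "start"): "Filling", ("Filling", "full"): "Idle"}

# Mealy style: the action hangs off the (state, event) transition itself.
mealy_transitions = {
    ("Idle", "start"):   ("Filling", lambda: print("start pump")),
    ("Filling", "full"): ("Idle",    lambda: print("stop pump")),
}

def moore_step(state, event):
    new_state = moore_transitions[(state, event)]
    moore_entry_actions[new_state]()   # action determined by the target state
    return new_state

def mealy_step(state, event):
    new_state, action = mealy_transitions[(state, event)]
    action()                           # action determined by the transition taken
    return new_state

moore_step("Idle", "start")      # prints "start pump", now in Filling
mealy_step("Filling", "full")    # prints "stop pump", now in Idle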
>From my W/M days, I considered states to be interruptable and transitions to be non-interruptable. An event entering a state which causes a transition will interrupt any processing that is happening in that state and cause the object to transition to the desired state. An event occuring while in a transition will have no effect, because transitions are non-interruptable, but can be queued until the transition is complete. If using the Moore notation this means that any valid event will interrupt the action processing within a state and kick the object into the appropriate next state. If using the Mealy notation this means that actions will always complete and it is not until the transition enters the next state that the list of events can be examined. This is similar to how the simulator works in BridgePoint. Actions are executed to completion and it is only at completion that events are examined. So my question is, why is BridgePoint using the Moore notation when the simulator acts like it is using the Mealy notation. If anyone's interested, my personal preference is the Moore notation where actions are interruptable, because they are contained within states. This allows an object to exit a state at the appropriate time by sending an event to itself. A very useful feature not supported by the BridgePoint simulator. Regards, Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: (SMU) SMUG US "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- The First Annual US Shlaer-Mellor User Group -------------------------------------------- The First Annual US Shlaer-Mellor User Group is scheduled for September 23-24 in San Jose California, just before the Embedded Systems Conference. The focus of this year's conference is "News You Can Use" and the program reflects that. We plan presentations and intensive workshops on: * Shlaer-Mellor in UML * Test Vector Generation * Interfacing with existing base classes * Bridges and How to Minimize Domain Coupling * Inheritance--Various Forms and Uses * Action Semantics * HW/SW Co-design * Bridge Patterns * GUI Translation and Development * Domain Partitioning: The Good, The Bad, and The Ugly * Metrics among others. Take a look at the program, and, if you like it, book online. The address is http://www.projtech.com Please pass on this message to interested parties. See you there! -- steve mellor PS The program is still in development. If you have some News You Can Use to share with others, contact me STP. Subject: Re: (SMU) It's all gone quiet... smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Tristan Pye... > Have I dropped off the list, or has it all been quiet for the last month or > so...? If you don't already know... There's been quite a bit of interesting S-M discussion on Rational's OTUG mailing list - thanks to our very own Lahman, Mellor and Munday. BTW, will you be at the Aerosystems open day in Reading on Thursday? -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 Subject: Re: STD Notation and BridgePoint(Was: (SMU) It's all gone "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- Leslie A. Munday wrote: > "Leslie A. 
Munday" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > -- > > On Fri, 23 Jul 1999 08:32:29 lahman wrote: > >lahman writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > > > >In a word: consistency. If a notation is to be simple it can't have a lot of > >exceptions. If a notation is to be easy to use one should express common > >things in exactly the same way each time. If a notation is to be rigorous, it > >should minimize the number of judgments to be made about the way concepts should > >be expressed (there are enough judgments in figuring out what concepts need to > >be expressed). [Go on OTUG and ask what the difference is between an > >aggregation relationship and a composition relationship. Then settle back for a > >couple of week's worth of exchanges as the UML gurus argue about it. One has to > >wonder about a notation where they keep providing lengthy quotes to each other > >from different parts of the reference manual and still can't agree on exactly > >when each type should be used. But I digress...] > > > > Sorry, what notation was that notation again? :-) > > Actually I've had a question that I wanted to pose to the list for a long time. > > It's to do with the STD notation and BridgePoint. [This may also be pertinent to other tools, but BridgePoint is the one with which I am familiar.] > > The Shlaer/Mellor method uses the Moore notation. This is where actions are placed within states. > > The alternative notation is Mealy, where actions are placed on transitions. > > >From my W/M days, I considered states to be interruptable and transitions to be non-interruptable. > > An event entering a state which causes a transition will interrupt any processing that is happening in that state and cause the object to transition to the desired state. > > An event occuring while in a transition will have no effect, because transitions are non-interruptable, but can be queued until the transition is complete. > > If using the Moore notation this means that any valid event will interrupt the action processing within a state and kick the object into the appropriate next state. > > If using the Mealy notation this means that actions will always complete and it is not until the transition enters the next state that the list of events can be examined. > > This is similar to how the simulator works in BridgePoint. Actions are executed to completion and it is only at completion that events are examined. > > So my question is, why is BridgePoint using the Moore notation when the simulator acts like it is using the Mealy notation. > If anyone's interested, my personal preference is the Moore notation where actions are interruptable, because they are contained within states. > > This allows an object to exit a state at the appropriate time by sending an event to itself. A very useful feature not supported by the BridgePoint simulator. > > Regards, > > Leslie. > > __________________________________________________________________ > Get your own free England E-mail address at http://www.england.com OOA91 defines a state's action as performed on entry entry to the state and having to complete before another event can be received by that instance's state machine. This effectively means that the timing of the action is the same for both a Mealy and Moore machine, i.e. the transition and action occur as a single item for a single instance's state machine. 
The difference is that the Mealy machine does not enforce the same action to be performed on all transitions to a particular state or that all events to a particular state carry the same event data. This would appear to allow greater flexibility in the state machine. Thus when simulating a Mealy or Moore state machine they would appear to act in the same way. The question then becomes: does Shlaer/Mellor use a true Moore machine? With regard to preferences for a Moore or Mealy machine, having used both I am not that sure which I prefer. The first tool that I used for Shlaer/Mellor development only supported the Mealy machine, and when we moved to a dedicated Shlaer/Mellor tool which enforced the use of the Moore notation it was found that the result was that the state machines had a greater number of states. However I would say that trying to document actions on transitions also led to rather cluttered diagrams. With regard to your preference for the Moore machine, I have been using Kennedy-Carter's I-OOA tool for the past four years and the effect you require is possible, as an event generated by an instance to itself is placed on the front of the event queue, not the back. Thus by arranging the state action to generate the event and finish processing, the desired action can be achieved. As an aside, I had thought that a state's action was considered to be an atomic unit, but having reread OOA91 it would appear that an action can be interrupted to perform an action in another instance's state machine, but that the interrupted action will always complete before another event is processed by the instance's state machine. I have never seen this sort of behaviour supported; the tools I have used have always treated an action as a single atomic unit. Regards Dave Harris Subject: Re: STD Notation and BridgePoint(Was: (SMU) It's all gone lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > An event entering a state which causes a transition will interrupt any processing that is happening in that state and cause the object to transition to the desired state. > > An event occurring while in a transition will have no effect, because transitions are non-interruptable, but can be queued until the transition is complete. > > If using the Moore notation this means that any valid event will interrupt the action processing within a state and kick the object into the appropriate next state. > > If using the Mealy notation this means that actions will always complete and it is not until the transition enters the next state that the list of events can be examined. > > This is similar to how the simulator works in BridgePoint. Actions are executed to completion and it is only at completion that events are examined. > > So my question is, why is BridgePoint using the Moore notation when the simulator acts like it is using the Mealy notation? It is more than BridgePoint. S-M itself decrees that, "Only one action of a given state machine can be in execution at any point in time. Once initiated, the action must complete before another event can be received by this instance's state machine" [OL:MWS pg 47] Thus neither the transition nor the state is interruptable. I would guess that what you are thinking about is the simultaneous view of time where state machines of different instances can be executing at the same time.
In this situation the architecture must ensure data access integrity (possibly at the process level), not the state machine model. To ensure this the architecture may introduce some locking scheme on the data stores that may pause an action's execution. This interruption, though, does not depend upon a new event to the instance's state machine being processed. Given the S-M restriction, it is difficult to come up with a situation where it would really matter whether the model were Mealy or Moore. The only way it seems to matter is that to convert a Moore machine to a Mealy machine one usually has to add more states because Moore requires the same data packet on all incoming events. > If anyone's interested, my personal preference is the Moore notation where actions are interruptable, because they are contained within states. > > This allows an object to exit a state at the appropriate time by sending an event to itself. A very useful feature not supported by the BridgePoint simulator. This is supported in OO96. Self directed events must be given priority over external events, so a state machine can introduce an event to shift itself to a state to properly accept the next external event. I thought most of the tools were up to date on this. Maybe you've been using Rose too long. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) STD Notation and BridgePoint lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris... > OOA91 defines a state's action as performed on entry entry to the state and having to complete before another event can be received by that instance's > state machine. This effectively means that the timing of the action is the same for both a Mealy and Moore machine, i.e. the transition and action occur > as a single item for a single instance's state machine. The differences being that the Mealy machine does not enforce the same action to be pereformed > on all transitions to a particular state or that all events to a particular state carry the same event data. This would appear to allow greater flexibility > in the state machine. Thus when simulating a Mealy or Moore state machine they would appear to act in the same way. The question then becomes > does Shlaer/Mellor use a true Moore machine? I believe that it does. Mealy is a more general case where output is determined by both input and state, where Moore is determined only by state. In either case, I believe the S-M restriction that only one action can be executed at a time in a given state machine would apply (i.e., it is a characteristic of finite state automata). However, I hesitate to count the moons that have passed since I looked at finite state automata -- I am taking your and Munday's word for which model is which! B-) > With regard to preferences for a Moore or Mealy machine having used both I am not that sure which I prefer.The first tool that I used for Shlaer/Mellor development only > supported the Mealy machine and when we moved to a dedicated Shlaer/Mellor tool which enforced the use of the Moore notation > it was found that the result was that the state machines had a greater number of states. This puzzles me. I could have sworn one got more states going from Moore to Mealy than vice versa. 
So I looked it up in my less-than-eloquent pocket dictionary of computing terms, which says that one can always go from Moore to Mealy by adding states. So far so good. But now that I think about it, it seems like the reverse would be true -- if one had different data packets on the events in a Mealy model, then one would need multiple states in the Moore model to accommodate them. Are we sure Moore is action-on-state and Mealy is action-on-transition? My brain is beginning to hurt. > However I would say that trying to document actions on > transitions also lead to rather cluttered diagrams. > > With regard to your preference for the Moore machine, I have been using Kennedy-Carter's I-OOA tool for the past four years and the effect you > require is possible as an event generated by an instance to itself is placed on the front of the event queue not the back. Thus by arranging the state > action to generate the event and finish processing the desired action can be achieved. I think most of the tools now support OOA96 where self directed events must have priority. > As an aside, I had thought that a state's action was considered to be an atomic unit, but having reread OOA91 it would appear that an action can be > interrupted to perform an action in another instances state machine, but that the interrrupted action will always complete before another event was > processed by the instances state machine. I have never seen this sort of behaviour supported, the tools I have used have always treated an action as > a single atomic unit. As I indicated in my response to Munday, I think this is more of an architecture issue. If one allows multiple state machines to execute simultaneously, they would still have to obey the one-machine-one-action-running rule. However, the data accesses are a whole other story. I believe that S-M has generally supported the idea that individual actions could be put into a wait or paused state to support mechanisms to ensure data access integrity. If that is the case, then the atomic unit of processing is the ADFD process. But only insofar as pausing is concerned; not re-entering. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) STD Notation and BridgePoint "Paul Higham" writes to shlaer-mellor-users: -------------------------------------------------------------------- I suggest a good reference for this topic is the book by John Hopcroft and Jeffrey Ullman called "Introduction to Automata Theory, Languages and Computation" . In the 1979 edition (Mr. Lahman is not the only one who prefers to forget the ravages of time since last this was studied!) on page 44 there are two theorems asserting the equivalence of Moore and Mealy machines. Briefly the first theorem states that for any Moore machine there is an equivalent Mealy machine with the same state set; the second states that for each Mealy machine there is an equivalent Moore machine but with a much bigger state set (namely the cartesian product of the Mealy machine's state set and its output alphabet - whatever that is in this context!) By the way, the mnemonic that I use to distinguish where the action is associated is the second letter of the name: Moore's second letter is an 'o' which is shaped like a node, i.e., a state, whereas the second letter of Mealy is 'e' which stands for 'edge', i.e., a transition. 
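For readers who want to see the distinction in code rather than prose, here is a minimal sketch of the same trivial two-state behaviour written once as a Moore machine (action attached to the state entered) and once as a Mealy machine (action attached to the transition). The states, events and actions are invented for illustration; this is not output from, or a model of, any of the tools discussed in the thread.

#include <cstdio>

enum class State { Idle, Busy };
enum class Event { Start, Done };

// Moore: the action is a property of the state being entered.
struct MooreMachine {
    State state = State::Idle;
    void onEnter(State s) {                        // action-on-state
        if (s == State::Busy) std::printf("Moore: start work\n");
        else                  std::printf("Moore: back to idle\n");
    }
    void dispatch(Event e) {
        if (state == State::Idle && e == Event::Start) { state = State::Busy; onEnter(state); }
        else if (state == State::Busy && e == Event::Done) { state = State::Idle; onEnter(state); }
        // unexpected events are simply ignored in this sketch
    }
};

// Mealy: the action is a property of the (state, event) pair, so two different
// transitions into the same state are free to perform different actions.
struct MealyMachine {
    State state = State::Idle;
    void dispatch(Event e) {
        if (state == State::Idle && e == Event::Start) {
            std::printf("Mealy: start work\n");    // action-on-transition
            state = State::Busy;
        } else if (state == State::Busy && e == Event::Done) {
            std::printf("Mealy: back to idle\n");
            state = State::Idle;
        }
    }
};

int main() {
    MooreMachine moore; MealyMachine mealy;
    moore.dispatch(Event::Start); mealy.dispatch(Event::Start);
    moore.dispatch(Event::Done);  mealy.dispatch(Event::Done);
}

Because an S-M action runs to completion once its state is entered, the two forms execute identically here; the difference only shows up when two transitions into the same state need different actions, which Mealy expresses directly and Moore can only express by splitting the state.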
paul higham ESN: 852 7915 Subject: RE: (SMU) STD Notation and BridgePoint "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >This puzzles me. I could have sworn one got more states going from Moore to Mealy than vice versa. Are we sure Moore is action-on-state >and Mealy is action-on-transition? My brain is beginning to hurt. Yes, Moore machines have the actions on the states, both in classical and SMOOA automata. Mealy-to-Moore can create a state explosion, whereas going the other way creates an extra state or two (with the rote conversion.) It often happens with Moore machines that most self-directed events are the type that say, "Done", and that these are the only events receivable by the particular states which generate them. These should be thought of as transition actions, giving essentially a mixed Mealy-Moore machine. Of course things are simpler if your vendor supports that explicitly (some do). -Chris Lynch Abbott AIS San Diego, CA Subject: RE: (SMU) STD Notation and BridgePoint David.Whipp@smi.siemens.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman wrote > This puzzles me. I could have sworn one got more states > going from Moore to Mealy than vice versa. Are we sure Moore is action-on-state > and Mealy is action-on-transition? My brain is beginning to hurt.
Just consider the number of unique actions, and the minimum number of states needed to support them. In a Moore machine, for N unique actions, you need N states. In a Mealy machine, for N unique actions, you need only 1 state (with N transitions). This does not mean that the Mealy machine is better. Because you put actions on transitions, you will need to duplicate actions on many transitions entering a state. A pathological example would be to convert a Moore to a Mealy by simply copying each state's action to the input transitions of the state (if we were doing this in hardware, we'd want to add a null transition to hold the action on each clock cycle until the next event). So there's a trade-off. In Mealy, you get extra actions; in Moore you get extra states. My personal opinion is that I want to be able to combine Moore and Mealy to allow me to produce an elegant description of a system's behaviour. At the very least, I'd want "on-exit" actions for a Moore machine. State models are one area in which I prefer a fuller mapping of UML. A further point is the "do" keyword in UML. When a machine is in a state, it should be able to do something that is dependent on that state. An SM model often requires extra events, transitions and states to emulate the concept that is embodied in the "do" action. It can be thought of as placing an observer on the data stores that are used within the do action. > As I indicated in my response to Munday, I think this is more > of an architecture issue. If one allows multiple state > machines to execute simultaneously, they would still have > to obey the one-machine-one-action-running rule. However, > the data accesses are a whole other story. I believe that > S-M has generally supported the idea that individual > actions could be put into a wait or paused state to support > mechanisms to ensure data access integrity. If that is the > case, then the atomic unit of processing is the ADFD > process. But only insofar as pausing is concerned; not re-entering. My interpretation of interruptability is simple. The architecture can do whatever it wants: interrupts, pausing, interleaving, etc. The sole requirement is that the externally visible behaviour of the system is a subset of the behaviour that would be possible if state actions were not interruptable. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) STD Notation and BridgePoint "Sean P. De Merchant" writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > > With regard to preferences for a Moore or Mealy machine having > > used both I am not that sure which I prefer.The first tool that I used > > for Shlaer/Mellor development only > > supported the Mealy machine and when we moved to a dedicated > > Shlaer/Mellor tool which enforced the use of the Moore notation > > it was found that the result was that the state machines had a > > greater number of states. > > This puzzles me. I could have sworn one got more states going from > Moore to Mealy than vice versa. So I looked it up in my less-than-eloquent > pocket dictionary of computing terms, which says that one can always > go from Moore to Mealy by adding states. So far so good.
But now that > I think about it, it seems like the reverse would be true -- if > one had different data packets on the events in a Mealy model, > then one would need multiple states in the Moore model to accommodate > them. Are we sure Moore is action-on-state > and Mealy is action-on-transition? My brain is beginning to hurt. The critical point of your statement is "that one _can_ always go from Moore to Mealy by adding states." What this probably means is someone found a proof that a Moore model can _always_ be turned into a Meally model by adding additional states. This does not mean that one must _necessarily_ add states to go from a Moore to a Mealy model. enjoy, Sean Subject: Re: STD Notation and BridgePoint(Was: (SMU) It's all gone "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Tue, 27 Jul 1999 09:59:07 David Harris wrote: >"David Harris" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Leslie A. Munday wrote: > >> "Leslie A. Munday" writes to shlaer-mellor-users: >> -------------------------------------------------------------------- >> >> -- >> >> On Fri, 23 Jul 1999 08:32:29 lahman wrote: >> >lahman writes to shlaer-mellor-users: >> >-------------------------------------------------------------------- >> > >> > >With regard to your preference for the Moore machine, I have been using Kennedy-Carter's I-OOA tool for the past four years and the effect you >require is possible as an event generated by an instance to itself is placed on the front of the event queue not the back. Thus by arranging the state >action to generate the event and finish processing the desired action can be achieved. > >As an aside, I had thought that a state's action was considered to be an atomic unit, but having reread OOA91 it would appear that an action can be >interrupted to perform an action in another instances state machine, but that the interrrupted action will always complete before another event was >processed by the instances state machine. I have never seen this sort of behaviour supported, the tools I have used have always treated an action as >a single atomic unit. > I actually came up with this question over the last 18 months, since it is 18 months ago that I last used BridgePoint. At the time I remember building states with some quite heavy processing in them which got quite nested at times. When I finally resolved the thing that I was looking for I found myself having to exit cleanly from all these nests. It would have been so much easier for the state to send an event to itself. I often thought that the reason for this might be due to the limitations of ASL over ADFDs. Is there any progress on SMALL or any new SM publications for that matter? Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) STD Notation and BridgePoint "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- Just one thing that you mentioned a couple of times in your reply: That is the 'one state one action' rule. When I used to draw ADFDs, it was possible to branch to two bubbles simultaneously from a single input. I found this is necessary, (and unfortunately not supported by any S/M tools) if you wish to state what an action does without specifying how it does it. I.e. 
in order to satisfy a state process I must determine A & B, but it doesn't matter what order they are done in. Using ADFDs you show two threads through the diagram meeting at the bubble which generates the exiting event. It looks like two actions are occurring in parallel, but I'm going to argue that they're not (it just hasn't been specified which is done first), because otherwise I could be accused of creating concurrent states by OTUG lurkers. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) STD Notation and BridgePoint "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- Leslie A. Munday wrote: > "Leslie A. Munday" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Just one thing that you mentioned a couple of times in your reply: > > That is the 'one state one action' rule. It is not so much that each state has a single action, but that all processing in a state is performed as a single un-interruptable unit. > > > When I used to draw ADFDs, it was possible to branch to two bubbles simultaneously from a single input. > > I found this is necessary, (and unfortunately not supported by any S/M tools) if you wish to state what an action does without specifying how it does it. > > I.e. in order to satisfy a state process I must determine A & B, but it doesn't matter what order they are done in. > > Using ADFDs you show two threads through the diagram meeting at the bubble which generates the exiting event. As long as no state changes are involved this apparent parallel processing is not a problem. It is a criticism that has been laid against most action languages that process ordering is unnecessarily defined. I would argue that what you were doing was more correct than what the action languages force us to do. An analyst should state what has to be done; the design (translation) stage should then implement this in the most efficient way. Indeed if an action language with semantics similar to Z could be used then no unnecessary ordering need be specified. > > > It looks like two actions are occurring in parallel, but I'm going to argue that they're not (it just hasn't been specified which is done first), because otherwise I could be accused of creating concurrent states by OTUG lurkers.
> If the two processes are performed in the same state then they will not be performed in parallel, the analyst will have correctly identified processes that can be performed in any order, and a desicion will be made during the translation regarding the order to perform them. Whether the processes were performed sequentially or in parrallel would thus be an architectural issue. Dave Harris 'archive.9908' -- Subject: (SMU) Action Semantics RFP progress info Scott Finnie writes to shlaer-mellor-users: -------------------------------------------------------------------- The latest PT newletter states uml.simware.com as the site for progress on the Action Semantics RFP; however this seems to be a secure site. The only info available is the list of companies (not exactly "great details on the work so far" as the newsletter puts it). Does anyone know how to access further information on the site? - Scott. -- Scott Finnie Tel. (+44) 131 331 7756 Telecoms Systems Division Hewlett Packard Ltd. mailto:sfinnie@sqf.hp.com Subject: Re: (SMU) Action Semantics RFP progress info "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- Scott Finnie wrote: > Scott Finnie writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > The latest PT newletter states uml.simware.com as the site for progress > on the Action Semantics RFP; however this seems to be a secure site. > The only info available is the list of companies (not exactly "great > details on the work so far" as the newsletter puts it). > > Does anyone know how to access further information on the site? > > - Scott. Kennedy Carter have a web page covering the action semantics it also gives an e-mail address to get more information, although I have not tried requesting anything. the web page is: http://www.kc.com/html/action_semantics.html Dave Subject: Re: (SMU) Action Semantics RFP progress info "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- The site has the correct address, and yes, it is secure. However, at the last meeting of the Consortium we agreed to make the site open. Unfortunately, the person responsible seems to be on vacation or something. I'll follow up and tell you as soon as we have the site opened. Sorry about this.....must be a _really long_ vacation. -- steve mellor At 12:27 PM 8/5/99 +0100, Scott Finnie wrote: >Scott Finnie writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >The latest PT newletter states uml.simware.com as the site for progress >on the Action Semantics RFP; however this seems to be a secure site. >The only info available is the list of companies (not exactly "great >details on the work so far" as the newsletter puts it). > >Does anyone know how to access further information on the site? > > - Scott. > >-- >Scott Finnie Tel. (+44) 131 331 7756 >Telecoms Systems Division >Hewlett Packard Ltd. mailto:sfinnie@sqf.hp.com > > > > Subject: Re: (SMU) Action Semantics RFP progress info Campbell McCausland writes to shlaer-mellor-users: -------------------------------------------------------------------- Thanks a lot, As an action language translation practitioner, I'm VERY keen to see what you've been up to. - campbell "Stephen J. Mellor" wrote: > "Stephen J. 
Mellor" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > The site has the correct address, and yes, it is secure. > > However, at the last meeting of the Consortium we agreed to make > the site open. Unfortunately, the person responsible seems to be > on vacation or something. I'll follow up and tell you as soon as > we have the site opened. > > Sorry about this.....must be a _really long_ vacation. > > -- steve mellor > > At 12:27 PM 8/5/99 +0100, Scott Finnie wrote: > >Scott Finnie writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > >The latest PT newletter states uml.simware.com as the site for progress > >on the Action Semantics RFP; however this seems to be a secure site. > >The only info available is the list of companies (not exactly "great > >details on the work so far" as the newsletter puts it). > > > >Does anyone know how to access further information on the site? > > > > - Scott. > > > >-- > >Scott Finnie Tel. (+44) 131 331 7756 > >Telecoms Systems Division > >Hewlett Packard Ltd. mailto:sfinnie@sqf.hp.com > > > > > > > > Subject: Re: (SMU) Action Semantics RFP progress info Scott Finnie writes to shlaer-mellor-users: -------------------------------------------------------------------- For info, the site is now accessible. Thanks Steve/Erik. - Scott. Erik Hagstrom wrote: I got in ok. It is secure only in the sense that it uses SSL. Just click on the "Start SSL Encryption" link. Should work. It took me to https://UML.Simware.COM/SHTML/SimUML_Index.html Enjoy! -- "Stephen J. Mellor" wrote: > The site has the correct address, and yes, it is secure. > > However, at the last meeting of the Consortium we agreed to make > the site open. Unfortunately, the person responsible seems to be > on vacation or something. I'll follow up and tell you as soon as > we have the site opened. > > Sorry about this.....must be a _really long_ vacation. > > -- steve mellor > > At 12:27 PM 8/5/99 +0100, Scott Finnie wrote: > >Scott Finnie writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > >The latest PT newletter states uml.simware.com as the site for progress > >on the Action Semantics RFP; however this seems to be a secure site. > >The only info available is the list of companies (not exactly "great > >details on the work so far" as the newsletter puts it). > > > >Does anyone know how to access further information on the site? > > > > - Scott. > > > >-- > >Scott Finnie Tel. (+44) 131 331 7756 > >Telecoms Systems Division > >Hewlett Packard Ltd. mailto:sfinnie@sqf.hp.com > > > > > > > > -- Scott Finnie Tel. (+44) 131 331 7756 Telecoms Systems Division Hewlett Packard Ltd. mailto:sfinnie@sqf.hp.com Subject: Re: (SMU) OMG ASL RFP for UML Brad Appleton writes to shlaer-mellor-users: -------------------------------------------------------------------- Forgive me - I thought I was following this thread but it appears not closely enough because I'm still confused on a few things ... what stage is the current ASL RFP for UML at? At first I thought there was already a proposal on the table and it was only a short matter of time (3-6 months) before its acceptance. Now I'm wondering if the RFP is near its very beginning rather than at its end. 
Lastly, I was under the impression the ASL proposal was going to specify syntax and semantics as well as any additional graphical symbols (if they are used) so that other tools could support the same textual syntax and graphical representation rather than each making up their own to support common definitions, or common syntax but with their own semantics or graphic representations. If someone could point in the right direction to find out these answers I'd be very appreciative. Thanks! -- Brad Appleton http://www.enteract.com/~bradapp/ "And miles to go before I sleep." -- Robert Frost Subject: Re: (SMU) OMG ASL RFP for UML "Stephen J. Mellor" writes to shlaer-mellor-users: -------------------------------------------------------------------- The short answers are as follows: * The RFP has been issued (last Novenmber) * There is a consortium of companies who are working together to make a response. * The response(s) are due in September * (Whoops!) We're asking for an extension to next year * The scope of the RFP is to produce a semantic model * And (optionally) a syntax or two * There is no _explicit_ provision for a graphical syntax, though oe would hope that a combination of state machines and activity diagrams would do the job. * For more answers, see http://uml.simware.com If that's too short, ask some more, and I'll do what I can. -- steve At 12:00 PM 8/20/99 -0500, Brad Appleton wrote: >Brad Appleton writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Forgive me - I thought I was following this thread but it appears not >closely enough because I'm still confused on a few things ... what stage >is the current ASL RFP for UML at? At first I thought there was already >a proposal on the table and it was only a short matter of time (3-6 >months) before its acceptance. Now I'm wondering if the RFP is near its >very beginning rather than at its end. > >Lastly, I was under the impression the ASL proposal was going to specify >syntax and semantics as well as any additional graphical symbols (if they >are used) so that other tools could support the same textual syntax and >graphical representation rather than each making up their own to support >common definitions, or common syntax but with their own semantics or >graphic representations. > >If someone could point in the right direction to find out these >answers I'd be very appreciative. > >Thanks! >-- >Brad Appleton http://www.enteract.com/~bradapp/ > "And miles to go before I sleep." -- Robert Frost > > Subject: (SMU) RE: (ROSE) code generation for a finite state machine Tristan Pye writes to shlaer-mellor-users: -------------------------------------------------------------------- I don't think that Rose can do this directly, but you could possibly use the API calls to extract the state machine meta-data out of Rose, (See the Extensibility Reference for details) and write a bespoke program to navigate through these to generate your code - it is entirely possible to generate 100% of your code using this method (if your state machines are good enough!) This sort of thing is done all the time when doing Shlaer-Mellor OOA/RD. There is a similar mailing list to this one for Shlaer Mellor - I'll forward this message on to it - I'm sure there will be plenty of people there who would be able to help you out if you need more detail. To subscribe to the SM user group send a message to majordomo@projtech.com with subscribe shlaer-mellor-users in the _body_ of the message. 
The e-mail address is optional; if not provided, you will be subscribed under the address from which you sent the message. Hope this helps, Tristan. Note to SM User Group - don't forget to reply direct to Dimitri, just in case he hasn't subscribed yet!! > -----Original Message----- > From: Dimitri Zuodar [mailto:zuodar_d@yahoo.com] > Sent: Friday, 27 August 1999 09:47 > To: rose_forum@Rational.Com > Subject: (ROSE) code generation for a finite state machine > > Hello, > > I'm a student working on a project to get my diplome. The subject is > "C++ code generation for a finite state machine". Does anyone know > something about such a code generation, can Rational Rose generate the > class-files for the different states and events if you would draw them > into a state-diagramma or does Rational Rose produce a file which can > be invested by another program so that this program can produce the > different classes? > > Thank you, > Dimitri Zuodar Subject: RE: (SMU) RE: (ROSE) code generation for a finite state machine "Levkoff, Bruce" writes to shlaer-mellor-users: -------------------------------------------------------------------- Rational's Rose Real-Time will generate classes and FSM code from pictures. Their execution engine is based on ObjectTime Limited's product. Bruce Subject: RE: (SMU) RE: (ROSE) code generation for a finite state machine "Paul Higham" writes to shlaer-mellor-users: -------------------------------------------------------------------- Unless ObjectTime has changed recently, you still need to "lovingly handcraft with care" some of the C++. Estimates from Rationale vary from 30% to 90% code generation from the models, I have never seen a claim of 100% code generation. paul higham ESN: 852 7915
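Higham's point about partial generation is easier to see with a concrete sketch. The dispatch skeleton and transition table below are the kind of thing a translator can emit mechanically from state-machine meta-data, while the action bodies remain hand-written; the class, states and events are invented for illustration and are not the output of Rose Real-Time, BridgePoint or any other actual tool.

// --- Generated portion: states, events and a populated transition table ---
enum State { IDLE, ACTIVE, NUM_STATES };
enum Event { EV_GO, EV_STOP, NUM_EVENTS };

class Motor {
public:
    void dispatch(Event e);                    // generated dispatcher
private:
    void actionIdle();                         // hand-written state actions
    void actionActive();
    State current = IDLE;
    static const State transition[NUM_STATES][NUM_EVENTS];
};

// Next state per (current state, event), populated from the state model.
const State Motor::transition[NUM_STATES][NUM_EVENTS] = {
    /* IDLE   : EV_GO, EV_STOP */ { ACTIVE, IDLE },
    /* ACTIVE : EV_GO, EV_STOP */ { ACTIVE, IDLE },
};

void Motor::dispatch(Event e) {
    current = transition[current][e];
    switch (current) {                         // Moore style: run the entered state's action
        case IDLE:   actionIdle();   break;
        case ACTIVE: actionActive(); break;
        default:     break;
    }
}

// --- Hand-written portion: the designer supplies the action bodies ---
void Motor::actionIdle()   { /* e.g. stop the motor */ }
void Motor::actionActive() { /* e.g. start the motor */ }

How much of the system ends up "generated" then depends mostly on how much behaviour lives in the action bodies rather than in the state structure itself.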
Subject: RE: (SMU) RE: (ROSE) code generation for a finite state machine "Levkoff, Bruce" writes to shlaer-mellor-users: -------------------------------------------------------------------- Right. State actions are written in C++ like before. What you get are class templates and populated transition tables. This is seen as a benefit by some. Bruce 'archive.9909' -- 'archive.9910' -- Subject: (SMU) Need help conceptualizing Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Greetings All! This may seem like a pretty generic question, but how do you know whether objects belong in, say, a UI domain, or whether they belong in an APPLICATION domain? Lets say (for the sake of discussion) I have a mapping of one-to-many that I want to display in list form (again for the sake of discussion we'll just display the key). When I click on a "many" item key I want to highlight the "one" item key. Lets use <> for the example:
1. DOG (D)                            2. DOG OWNER (DO)
   * Dog ID                              * Owner ID
   o Dog Name            owns            o Owner Name
   o Sex           <<----------------->  o Address
   o Breed                R1             o Phone Number
   o Weight            owned by
   o Owner ID (R1)

So, when I click on a Dog ID in the UI list, the Owner ID is highlighted. I assume that DOG and DOG OWNER are objects in the APPLICATION domain. Are there such things as UI_DOG and UI_DOG OWNER? I know this is kind-of vague, but if we assume that the DOGs and DOG OWNERs are already created and associated, how would you initially populate the UI domain lists from the APPLICATION domain associations? Via a bridge? How are Dog ID and Owner ID exported out of the APPLICATION domain into the UI domain? Would someone help code this up? I am having trouble conceptualizing! (esp. the bridge). Kind Regards, Allen Theobald Nova Engineering, Inc. Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- -----Original Message----- From: Allen Theobald [mailto:theobaam@email.uc.edu] Sent: Tuesday, October 05, 1999 2:14 PM To: shlaer-mellor-users@projtech.com Subject: (SMU) Need help conceptualizing Allen Theobald writes to shlaer-mellor-users: > This may seem like a pretty generic question, but how do you know > whether objects belong in, say, a UI domain, or whether they belong in an > APPLICATION domain? > > ... > > So, when I click on a Dog ID in the UI list, the Owner ID is > highlighted. > > I assume that DOG and DOG OWNER are objects in the APPLICATION > domain. Are there such things as UI_DOG and UI_DOG OWNER? There will be no UI_DOG object. That should set off your domain pollution alarms. The objects in the UI domain will come from your description of that domain. You mentioned concepts like "list", "click" and "highlight". The real question here is whether or not the "owned by" relationship has a counterpart in the UI domain, or if the navigation is done in the application domain. The general answer is "it depends", but in this case, I'll stick my neck out and say "yes". My reasoning is that a generalized UI domain will be able to do this highlighting on things other than dogs and owners. In a network monitor application, you might want to link computers to subnets, etc. To avoid repetition, we need to localise the behaviour. You'd end up with huge numbers of bridges all doing the same thing if you wanted to do the navigation in the application domains. We want to make the UI fairly general, so let's take the simple approach:
list(*name, .position)
list_item(*name, .list_name(R1), .is_highlighted)
list_item_link(*item_1(R2), *item_2(R2))
(.list_name in list_item should be an identifier, but it's simpler not to do that here. For this model, no 2 list items can have the same name, even if they're in different lists.) R2 is reflexive on list_item, and is symmetric. [aside: I like automagic stuff. So I'd probably have list_item { .is_selected : bool .is_highlighted(M) : bool <= exists(this->R2->list_item.is_selected); } and then have an automagic link onto the actual display system to control highlighting from the (M) attribute. But in OOA96, the method doesn't permit this type of automagic.] Anyway, however you do it, you can hopefully see how the UI would work. In fact, you could simulate it on a single-domain simulator. But now you want to link it to the application domain. The first bit is easy. You want two instances of "list": one named "dog"; the other named "owner".
But now you want to automagically populate the list_item and list_item_link from the application domain. Let's look at the requirement:

  Create list("Dog")
  Create list("Owner")
  Foreach Dog, create list_item(Dog.name, "Dog", default)
  Foreach Owner, create list_item(Owner.name, "Owner", default)
  Foreach Dog, create list_item_link(Dog.name, Owner.name)

(Remember, I didn't include the list name in the identifier of list_item, which keeps list_item_link simpler.)

This gets more complex when you realise that you also need to specify what to do when Dogs and Owners are deleted, and when the Dog.owner referential attribute is changed. What you actually want to do is to bridge the respective create/delete/write accessors from the application domain to SDFDs in the UI domain that create/delete list_item and list_item_link objects.

With a clever architecture (or a really dumb one!), this group of statements could be used to maintain the population of the UI domain. You simply process them every time a DOG or OWNER is created or deleted, or when the Dog.owner referential attribute is written. Obviously, the clever architecture would only process the changes.

This solves the problem you presented, but not the general problem: what happens when you want to right-click on a dog in the UI and cause something to happen in the application domain? We need to link UI identifiers back to the application domain. "Half tables" are simply tables that link elements from one domain to elements in another, where "elements" may be instances in either the domain model or its meta-model. Although Steve doesn't like it, you can conceptualise the half-table as an associative object that lives in the bridge between the two domains. Just remember that it's only a simile: it's not really an associative object (nor even an object). It can **never** have a state model! Then Steve will be happy :-)

We need two tables:

  item_to_dog_map(item_name, dog_name)
  item_to_owner_map(item_name, owner_name)

There's no good reason to maintain a bidirectional link to the list_item_link objects. However, you might include the appropriate table in the bridge for completeness: to keep a consistent level of indirection between the domains.

With this table in place (maintained by either an extension of the mechanism outlined above, or via a declarative specification, or simply populated statically), an architecture can translate identifiers in the UI domain to identifiers in the application domain, and vice versa. It can happen automatically!

Hope this helps.

Dave.

--
Dave Whipp, Senior Verification Engineer
Infineon Technologies Corp., San Jose, CA 95112
mailto:David.Whipp@infineon.com  tel. (408) 501 6695
Opinions are my own. Factual statements may be in error.

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

A somewhat different spin than Whipp's that is oriented more around what sort of object should be in a UI domain...

> This may seem like a pretty generic question, but how do you know
> whether objects belong in, say, a UI domain, or whether they belong in an
> APPLICATION domain?

I find it useful to think of two levels of abstraction when dealing with UI domains. The lowest level is really a low level implementation domain that deals with objects like 'Text Box' or 'Radio Button' and it has no knowledge of the semantics of the values displayed or the relationships between controls.
Typically this domain is a third party domain that provides graphic widgets and window management services. It is not very relevant to your example. For the sake of exposition let's call this the Window Manager domain. At a higher level of abstraction I think of a domain that knows about aggregates of data to be displayed. That is, it knows about the overall logic of display collections. A basic object might be Screen. Though Screen corresponds nicely with a Window in the Window Manager domain it is a quite different view of the display material. Screen has no knowledge of the mechanics of display and update of attributes; it relies on the Window Manager domain's services for this. It also has little knowledge of the semantics of the application -- typically only enough to perform data entry validation. It is mostly concerned with handling aggregates of data values strictly according to the way the end user views them. As such, I tend to regard this domain as a high level implementation domain. But you could make a case that it is a low level application domain because it necessarily reflects the user's view of the specific application. > Lets use <> for the example: > > 1. DOG (D) 2. DOG OWNER (DO) > * Dog ID * Owner ID > o Dog Name owns o Owner Name > o Sex <<-----------------> o Address > o Breed R1 o Phone Number > o Weight owned by > o Owner ID (R1) > > So, when I click on a Dog ID in the UI list, the Owner ID is > highlighted. > > I assume that DOG and DOG OWNER are objecs in the APPLICATION domain. > Are there such things as UI_DOG and UI_DOG OWNER? I agree with Whipp that things like UI_DOG should raise warning flags. The GUI domain should be concerned with views of data values, not the semantics of the data. I would tend organize the GUI domain's objects around the screen entities that the user will see. However, it would not surprise me to have object abstractions of the same underlying entities in both areas. You mention a UI list of Dog IDs. As you describe it, there seems to be a screen with two lists, one of Dog IDs and one of Owner IDs, so that when you select one list entry the corresponding member of the other list is highlighted. I would probably have some 'XXX Screen' object for the display to manage both lists where 'XXX' describes the nature or purpose of that particular display (e.g., 'Ownership Screen'). Now 'Ownership Screen' doesn't know about things like controls, so you have a problem representing a variable list of attribute values. Let's look at some ways for modeling this. It seems likely that you will have a screen for displaying individual dog data, 'Dog Screen', and individual owner data, 'Dog Owner Screen'. If so, all you need are relationships between 'Ownership Screen' and these screens. When it comes time to talk to the Window Manager bridge to populate the display, you simply navigate the relationships to get the values for the event data packet sent to Window Manager. To manage the highlighting you would also navigate a relationship between 'Dog Screen' and 'Dog Owner Screen'. If you do not display separate windows for dog and owner information, then you have to solve the attribute list problem differently. One way would be to introduce 'Dog List' and 'Owner List' objects into the domain that simply managed lists of names. These might be instantiated at startup or whenever 'Ownership Screen' is activated, depending upon whether the application allows dogs and owners to be added and deleted during execution. 
The instantiation would be a straight forward query of the application for the relevant names. Each could have an attribute for the currently selected member. When a dog is selected (via an event from Window Manager), 'Ownership Screen' could ask the application who the owner was, then set the currently selected owner in the list, and finally send an event to Window Manager to highlight it. Another way is to not have either list of names explicitly in the GUI domain. In reality the domain really doesn't need to know about the individual names (in the minimal example thusfar). The individual names simply pass through the domain from the application to the Window Manager and flow of control in the domain is not dependent upon individual names. Therefore the code for handling the lists could be realized (i.e., written directly) and the movement of the data could be handled by accessors that were really methods for an architectural object that handled lists of strings. Then all you would need in my 'Ownership Screen' object would be two attributes for Dog List and Owner List. [To handle the highlighting you would also need attributes for Current Dog and Current Owner. These could be set based upon communications with the external domains.] That is, though the attributes are really lists, in the GUI domain context they are atomic data abstractions because the domain's abstraction does not care about the individual elements. There are two important points here. First, there are several ways to model the GUI. Second -- and more importantly -- the GUI domain model should depend upon what you need to display, not the organization of the application. Hopefully my examples dempnstrated that aspects of the display can drive to very different solutions that may or may not involve different views of the same entities that are abstracted in the application. Thus, if you happen to have a 'Dog Screen' that has much the same data as an application 'DOG' object, that should be rather coincidental. Any 'Dog Screen' object in the GUI will exist because it encapsulates a set of display values that are manipulated as a whole in the GUI, not because the application is about recording dog licenses. In particular, the 'Dog Screen' object will have no clue what a dog is. Note that UI_DOG may look just like 'Dog Screen' in the display and the information model but the names convey very different ideas. UI_DOG implies the object is a flavor of dog while 'Dog Screen' suggests a flavor of display. The former is appropriate for the application while the latter is appropriate for the GUI and that distinction is important -- if you can't make it, then something is wrong. A key consideration in defining a Domain Chart is to settle upon the correct level of abstraction and subject matter for each domain. These should be different for each domain. The acid test for the objects is whether they are consistent with the domain's abstraction and subject matter. So long as they are consistent it is then fair to have different abstractions of the same entity in different domains. If they aren't consistent, they don't belong in the domain regardless of what underlying entity they model. > I know this is kind-of vague, but if we assume that the DOGs and DOG > OWNERs are already created and associated, how would you initially > populate the UI domain lists from the APPLICATION domain associations? > Via a bridge? 
As Whipp suggests, there are lots of ways to implement such a domain so that most of the processing can be semi-automated in either the bridges or in the architecture (i.e., the GUI domain is likely to have few active objects). I tend to lean towards bridges while Whipp leans towards architectural legerdemain -- which is a quibble in itself because most people regard the bridges as part of the architecture anyway. Either way, the GUI domain itself tends to be rather dumb. I tend to view it as simply a collection of dumb data holders that really constitutes a kind of smart bridge between the application and the Window Manager that allows decoupling of the application's structure from the display structure -- something that is rather important when porting across platforms. I would tend to limit the GUI domain's functionality to data validation and temporary caching of data changes until the user is willing to commit them to the application (i.e., it is often more convenient to enforce some data integrity rules in the GUI than in the application itself). > How is Dog ID and Owner ID exported out of the APPLICATION domain into > the UI domain? As I indicated above, one could handle the highlighting by navigating a relationship in the GUI or by asking the application who owns a dog. In the former case the IDs would be created when the instances were created, using techniques such as those suggested by Whipp. In the latter case the GUI domain could issue an external event with a synchronous return. The bridge would navigate the relationship in the application domain between DOG and DOG OWNER and return an Owner ID. > Would someone help code this up? I am having trouble conceptualizing! > (esp. the bridge). The way I find useful to think of bridges is that they link the external interfaces of domains. Each domain has an interface that it presents to the external world. The events that go to/from a bridge in the domain represent the external interface of the domain. That interface is invariant with context (i.e., it doesn't change when you port the domain to another application). The role of the bridge is to provide glue code that matches up these interfaces. Ideally what happens is that event A1 generated in domain A gets mapped into event B1 in domain B with the same data packet. So the bridge accepts A1 from A, transfers the data packet from A1 to B1, and places B1 on B's event queue. Unfortunately interfaces rarely match up this nicely, so the bridge usually has to have smarter code. For example, A1 might be a high level request that has to be translated into two events, B1 and B2, for B's low level interface. Now the bridge has to split the A1 data packet into two data packets and place two events on B's event queue. [The Wormholes paper on the PT web site seems to assume only syntactic translations (e.g., units conversion, data packet ordering, etc.) can be made in bridges. I am in the camp that believes that bridges have to support semantic translations (1:M events, intermediate processing, etc.) as well. But I digress...] Many CASE tools have a surrogate object in the domain to which the event is directed. That surrogate object is associated with the bridge. Often that surrogate object will be implemented as a real object that has a method to process each event generated in the domain (the event manager will do a table lookup to determine what method to invoke for the event ID) and a method to process any event sent to the domain from other domains. 
These methods represent the domain's external interface and this interface is invariant once the domain's services have been defined. One way the bridge could be implemented is in the *implementation* of the surrogate object's methods. So when the method that processes event A1 in A's surrogate object is invoked, its implementation will know that it should invoke a particualr method in B's surrogate object (i.e., B's external interface). That method in B's surrogate object will place event B1 on the queue with the data packet. So if you are doing C++, you might simply substitute a new .cpp file for the the surrogate objects in the new application while keeping the same .hpp file. Or you can use polymorphic wrappers, etc. [Note that the surrogate object is two-sided: one side defines what requests the domain can send out while the other defines what services the domain will perform. The method implementations that process incoming requests will not change when the domain is ported because the domain internals do not change. However, the implementations for the outgoing requests do change because the target domains' interfaces change. Thus you might want two .cpp files: one for the incoming methods that is not replaced and one for the outgoing methods that is replaced in a new context.] In my example above things are even simpler if the GUI domain asks the application domain who owns a given dog. This can be done synchronously by invoking a synchronous wormhole (i.e., without an event). Many architectures implement a 'synchronous service' for this and associate it with the surrogate object (i.e., in the implementation it is mapped directly to a surrogate object method). This method invokes the corresponding application domain's method, whose implementation finds the DOG instance (the 'address' to which the event is sent), navigates to the relevant DOG OWNER, and returns the Owner ID. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman wrote > Responding to Theobald... > > A somewhat different spin than Whipp's that is oriented around towards > what sort of object should be in a UI domain... > ... > I find it useful to think of two levels of abstraction when > dealing with UI domains. The lowest level is really a low > level implementation domain that deals with objects like > 'Text Box' or 'Radio Button' and it has no knowledge of the > semantics of the values displayed or the relationships > between controls. I agree that this thing exists, and is separate from the domain that describes the mechanics that underpin the user interactions with their gui. But: > You mention a UI list of Dog IDs. As you describe it, there > seems to be a screen with two lists, one of Dog IDs and > one of Owner IDs, so that when you select one list entry the > corresponding member of the other list is highlighted. I > would probably have some 'XXX Screen' object for the display > to manage both lists where 'XXX' describes the nature or > purpose of that particular display (e.g., 'Ownership Screen'). ***AWOOGA***AWOOGA*** domain pollution, disinfect imediately!!! 
As soon as you start subtyping in one domain to reflect the subject matter of the domains you expect to talk to, you lose the potential for domain reuse. Leon Starr put it nicely when he said you should avoid using the same noun in two domains.

Use the 'user-interactions' domain to describe the principles that underlie the interactions. For example, the ability to click on one item and have a linked item be highlighted. Then you describe the actual user interface in the population of that domain. If you have an object

  SCREEN(*name, ...)

then you can populate it with instances named "Dog" and "Owner". If possible, you should try to describe specifics using data, not OOA. You use the OOA to define the meta-model that underlies your user interface model. The OOA model ensures that the correct magic happens when components of your specific-UI model interact.

As a general rule when modelling, always try to push as much as possible (*but no more!*) into the population of a domain. Don't build yourself a universal computer, but do abstract away enough details to keep the model clean. If the replacement of dogs with cats requires a UI domain change then there's something very wrong. If your data becomes a programming language, then there's also something wrong.

> > I know this is kind-of vague, but if we assume that the
> > DOGs and DOG OWNERs are already created and associated,
> > how would you initially populate the UI domain lists from
> > the APPLICATION domain associations? Via a bridge?

> Either way, the GUI domain itself tends to be rather dumb.
> I tend to view it as simply a collection of dumb data holders
> that really constitutes a kind of smart bridge between the
> application and the Window Manager that allows decoupling of
> the application's structure from the display structure --
> something that is rather important when porting across
> platforms. I would tend to limit the GUI domain's
> functionality to data validation and temporary caching of
> data changes until the user is willing to commit them to the
> application

If the GUI domain is too dumb, then perhaps it doesn't exist. I'd look again at the domain chart and make sure that the mission statements are in balance across the system. If you have a simple domain with a fat interface then you can almost guarantee that the domain chart is flawed.

(You can probably live with it, and get a working system, but that's engineering pragmatism, not the theoretical ideal.)

> > How is Dog ID and Owner ID exported out of the APPLICATION
> > domain into the UI domain?
>
> As I indicated above, one could handle the highlighting by
> navigating a relationship in the GUI or by asking the
> application who owns a dog. In the former case the IDs would
> be created when the instances were created, using techniques
> such as those suggested by Whipp. In the latter case the GUI
> domain could issue an external event with a synchronous
> return. The bridge would navigate the relationship in the
> application domain between DOG and DOG OWNER and return an
> Owner ID.

I really do *not* like this second approach. It requires you to perform a multi-domain simulation to validate the UI domain. This is BAD. You can construct an active test harness, but that too adds additional complexity to the testing of what is a trivial domain. If you put the link object into the domain then the domain becomes complete enough to be simulated in isolation, and the test harness becomes trivial.
If there is complexity associated with a problem, then it's better to put it in the domain of that problem than outside it.

I'm currently preparing a post that discusses the 'complete enough for single-domain simulation' philosophy. Hopefully it'll be ready by the end of this week.

Dave.

--
Dave Whipp, Senior Verification Engineer
Infineon Technologies Corp., San Jose, CA 95112
mailto:David.Whipp@infineon.com  tel. (408) 501 6695
Opinions are my own. Factual statements may be in error.

Subject: (SMU) Subject Matter are Single Domain

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

There is a school of thought in SM that likes the idea of "implicit bridges". I share that viewpoint, and I'd like to explore how we should think about bridging from this perspective. When we talk of implicit bridges, we do not mean that they are implicit from the system's point of view; only from the domain's point of view. The goal is to maximize the usefulness of an isolated domain, both for single-domain simulation and for static analysis of the properties of the domain.

In his presentation on bridging at the SMUG, Campbell McCausland spoke of the need to eliminate interfaces from a model. He argued that mere thought of the interface introduces coupling, however slight. The domain should be its own universe. Explicit bridging may decouple the data types and the flow of control, but it still contains the coupling that says "and now something happens outside my universe". If a benevolent god chooses to send in an external input then that is God's prerogative. But you should never solicit such an action.

Furthermore, the underlying meta-model is immutable. If a God exists, then that God will never break the laws of physics ("For proof denies faith, and without faith I am nothing" - God, quoted in The Hitchhiker's Guide to the Galaxy :-) ). External inputs are only visible from an SDFD: all experiments conducted within the domain will indicate that SDFD activation is random.

That's enough of the metaphysics. Let's look at the elimination of explicit bridging in practice. Consider the Robot from ODMS (I'm sure you remember this from the training course):

  object Robot(sn=R, num=1) {
    attribute x (type=column_number, desc="x coord of the robot");
    attribute y (type=row_number, desc="y coord of the robot");
    event Robot_Move_Complete (num=1);
    // ...
    state Moving_To_Next_Slot(num=4) {
      input new_x (type=column_number, pos=1);
      input new_y (type=row_number, pos=2);
      action (new_x, new_y) {
        x := new_x;
        y := new_y;
        wormhole: move_robot(new_x, new_y, "generate Robot_Move_Complete");
      }
      transition Robot_Move_Complete -> Extending_Grabber;
    }
    state Extending_Grabber(num=5) {
      // ...
    }
    // ...
  }

This uses explicit bridges. The model explicitly invokes the 'move_robot' wormhole, and tells it to generate the event when it's done. We are soliciting behavior outside our universe. We would like to eliminate this. A naive implicit bridging approach might look like this:

  action (new_x, new_y) {
    x := new_x;
    y := new_y;
  }

This assumes that the act of assigning x and y will implicitly trigger the wormhole; and that this implicit wormhole will return the expected event. When the attributes are read, the value returned will be the current location of the robot, determined using a half-table mapping to the hardware domain.

But this causes many problems: in the general case, the writes to x and y may occur at different times (e.g. from different states).
Any attempt to document/formalize this interface will become complex. Even the meaning of the x,y attributes is a puzzle: are they actual position, or desired position? Can the model ever guarantee that they are stable? How do you simulate the behavior in single-domain simulation? What is the domain-replacement strategy?

So, going back to the God analogy, this model acts entirely on faith. Nothing in the model indicates that the R1 event is expected. We aren't even asking God to help: we're just assuming that God knows what needs to happen. Usually in these situations, God sits back and has a good laugh. And then sends a plague of locusts (and other bugs).

To make the implicit bridging work, we must ensure we have a complete and correct model: one that will simulate as a single domain. This model will use the known laws of physics (OOA), and not rely on the intervention of a hypothetical God (or external domain). Here is the slightly surprising result:

  action (new_x, new_y) {
    x := new_x;
    y := new_y;
    generate R1: Robot_Move_Complete();
  }

The solicited event, R1, is explicitly generated. The bridge is completely hidden from the domain model, and so it will now simulate as a single domain. No test harness is necessary to generate the solicited event. Either the action completes successfully (the attributes are correctly set and the event generated) and the event is delivered, or an exception situation has occurred (to be handed by the [sic] OOA exception mechanism).

The bridging will be implemented as a meta-mapping. The accessor and event generation processes are mapped to the appropriate wormholes. The duration of the robot move defines the event delivery time. These wormholes must maintain the OOA time rules, but that is a problem for the bridge, not the ODMS domain. The behavior of the domain will be as if it were a single-domain simulation.

Let's go back to a question I posed earlier: what is the meaning of x and y? The answer is obvious: they are the position of the robot. "But", I hear you say, "that's not right. You're writing a desired position". This is not the same as the actual position until the robot move is complete.

This is slightly subtle. First look at the single domain simulation. Here everything works fine. The position is written and the event delivered. There is no ambiguity. So what about the multi-domain case? What's the problem?

Let's state the problem very clearly. The set accessors write a desired location. The get accessors get the actual location. In effect, the set accessor is no longer synchronous within the time-scope of the action. But the OOA formalism requires accessors to be synchronous. How do we resolve this requirement?

First, note that when the robot move is complete, the value returned by the 'get' will be identical with the value of the 'set'. If this is not true, then we have an exceptional condition, to be handled by an exception mechanism. So we can guarantee the value of the set accessor. It's just the timing that must be fixed.

Let's look at the architectural requirements. The model can assume that an accessor is synchronous. But what does this mean? A rather terse definition is that the externally visible behavior of the system must be invariant along all sequentially constrained paths from the point of activation of the set accessor. In other words, the set operation must have completed by the time a get accessor is used (and the architecture must delay the get accessor until the set operation is complete).
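As a concrete illustration of what such an architecture might do -- this is only a sketch, not part of any published Shlaer-Mellor architecture, and the names (LatchedAttribute, markComplete) are invented for the example -- a "late synchronizing" set accessor could be realized along these lines in C++:

  #include <condition_variable>
  #include <mutex>

  // Sketch only: a hypothetical architectural class for a delayed-completion attribute.
  template <typename T>
  class LatchedAttribute {
  public:
      // Set accessor: record the desired value and mark it pending until the
      // external operation (e.g. the robot move) reports completion.
      void set(const T& value) {
          std::lock_guard<std::mutex> lock(_m);
          _value = value;
          _stable = false;
      }

      // Called by the bridge when the external domain signals completion.
      void markComplete() {
          {
              std::lock_guard<std::mutex> lock(_m);
              _stable = true;
          }
          _cv.notify_all();
      }

      // Get accessor: block until any pending set has completed, so every
      // sequentially constrained reader sees the synchronized value.
      T get() {
          std::unique_lock<std::mutex> lock(_m);
          _cv.wait(lock, [this] { return _stable; });
          return _value;
      }

  private:
      std::mutex _m;
      std::condition_variable _cv;
      T _value{};
      bool _stable{true};   // nothing pending initially
  };

In the robot example the bridge's completion callback would call markComplete() on x and y when the hardware reports the end of the move, so any later get accessor returns the synchronized value.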
But this is only required along 'paths of sequential constraint'. What is a path of sequential constraint? It is the flow of the thread across elements of the model that *guarantee* sequential behavior. These are: dataflows, events and transitions. (The difference between events and transitions, in this context, is that events are propagation of the calling thread while transitions are the intersection of a thread with the state model.) Entry into an SDFD must also be sequentially constraining, otherwise their behavior would always be undefined (see below).

If a get accessor can be traced back to a set accessor along any sequentially constrained path, then the result of the get accessor must be consistent with the result following a synchronous set accessor. So the set accessor is synchronous, but the point of synchronization is as late as possible.

So far I've talked about sequentially constrained paths. Why? What is an unconstrained path? There are two models of time in OOA: simultaneous and concurrent. The difference is that the concurrent model defines additional modes of sequential constraint: only one action executes at a time, so any synchronous process must complete before the end of the action. To generalize the implicit bridge, we must forget about the concurrent mode, and work solely in the simultaneous mode.

OOA says that actions take time to execute. We must assume from this that processes also take time to execute. Anyone who's worked with parallel systems knows that the result of reading an unsynchronized variable is undefined. It is not restricted to only the before/after values: it could be anything! (This is why an unsynchronized SDFD would always have undefined behavior.) The same is true of attributes in the OOA. Any access to an attribute that does not lie on a path of sequential constraint from its set accessors is an unsynchronized access, so its result is undefined.

It is often possible to detect unsynchronized accesses using a static analysis of the model. Once we have complete formal semantics for UML, we should start to see more tools that support this type of analysis. (In ASIC design, static timing analysis is routine and, in most places, mandatory.)

So now let's go back to the robot model. The single domain model was:

  action (new_x, new_y) {
    x := new_x;
    y := new_y;
    generate R1: Robot_Move_Complete();
  }

And the bridging was defined so that the R1 event is not delivered until the robot move is complete. There is only one transition out of the state, and no one else generates the R1 event. So the only sequentially constrained path out of the state is along the event. Therefore any other access to the x and y attributes does not lie on a sequentially constrained path. Such an access would be unsynchronized, so the value returned is undefined in OOA. So there can be a long delay between the setting of the desired value of the attributes and the correct value being available. No properly synchronized access in the model can see this delay. So the model is correct.

Of course, it's easy to construct a multi-domain situation to which these arguments don't apply. I'd argue that an attempt to construct any implicit bridge that does not conform to the rules of OOA time is simply incorrect.

In summary: use implicit bridging with events to allow you to simulate your models as isolated domains. Connecting other domains should not affect the behavior of solicited events, only their timing. This makes test harnesses simpler, and allows more powerful static analysis of the model.
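To make the meta-mapping a little more concrete, here is a small C++ sketch of the kind of bridge described above. The names (RobotMoveBridge, EventQueue, onHardwareDone) are invented for the illustration and are not part of any published architecture; the only point is that the domain's own "generate" is what the bridge intercepts, and the hardware's completion report is what releases the event, so the duration of the move defines the delivery time:

  #include <queue>
  #include <string>

  struct Event { std::string label; };

  // Minimal stand-in for the architecture's event queue.
  class EventQueue {
  public:
      void post(const Event& e) { _q.push(e); }
  private:
      std::queue<Event> _q;
  };

  // Hypothetical bridge: maps "generate R1: Robot_Move_Complete()" onto the
  // move_robot wormhole and holds the solicited event until the move is done.
  class RobotMoveBridge {
  public:
      explicit RobotMoveBridge(EventQueue& q) : _queue(q) {}

      // Invoked by the architecture when the action executes the generate.
      void onGenerate(const Event& solicited, int new_x, int new_y) {
          _pending = solicited;
          startHardwareMove(new_x, new_y);   // the wormhole to the other domain
      }

      // Invoked by the external domain when the move finishes; only now does
      // the solicited event reach the queue.
      void onHardwareDone() { _queue.post(_pending); }

  private:
      void startHardwareMove(int, int) { /* implementation-specific */ }

      EventQueue& _queue;
      Event _pending;
  };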
Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > You mention a UI list of Dog IDs. As you describe it, there > > seems to be a screen with two lists, one of Dog IDs and > > one of Owner IDs, so that when you select one list entry the > > corresponding member of the other list is highlighted. I > > would probably have some 'XXX Screen' object for the display > > to manage both lists where 'XXX' describes the nature or > > purpose of that particular display (e.g., 'Ownership Screen'). > > ***AWOOGA***AWOOGA*** domain pollution, disinfect imediately!!! > > As soon as you start subtyping in one domain to reflect the > subject matter of the domains you expect to talk to, then you > lose the potential for domain reuse. Leon Starr put it nicely > when he said you should avoid using the same noun in 2 domains I do not follow this objection at all in this context. Were you reading the same message that I wrote? B-) Where am I subtyping objects in another domain? The only place where this is even superficially true would be a 'Ownership Screen' in the GUI domain vs. a Window object in Window manager domain. But those are clearly very different critters and the IM for GUI's Screen and Window Manager's Window would be very different. What noun is used in two domains? Even when I later speculated that there *might* be a 'Dog Screen' object, I went to the trouble of pointing out explicitly the difference between Dog (adjective) Screen (noun) and UI (adjective) _DOG (noun). I further pointed out that it was crucial to be able to ensure that such a distinction existed. > Use the 'user-interactions' domain do describe the principles > that underly the interactions. For exampe, the ability to click > on one item and have a linked item be highlighted. Then you > describe the actual user interface in the population of that > domain. > > If you have an object > > SCREEN(*name, ...) > > then you can populate it with instances named "Dog" and "Owner". Why? In the portion of the message you are objecting to, I am proposing neither. I am suggesting that what the user wants to see is a screen with two lists on it. It seems to me that when I name that entity 'Ownership Screen' I am addressing something quite different from the rest of the application. That 'something' is unique to the GUI domain, so I see no domain pollution at all. However, even with the speculation I made later that there might be a 'Dog Screen' and 'Owner Screen' in addition to the 'Ownership Screen', you seem to be suggesting that they are all instances of the same object, SCREEN. But they are not because they have different data, so in the OOA for the GUI domain they *must* be separate objects. It seems to me that your argument would only apply in the Window Manager domain where control objects would be separated from the Window objects that correspond to the GUI's screens, allowing the various screens to be differentiated at the instance level for a single Window object. But to get there you have to split up the GUI's Screen objects into multiple objects in the Window Manger domain that have quite different abstractions. > If possible, you should try to describe specifics using data, not > OOA. 
You use the OOA to define the meta-model that underlies your > user interface model. The OOA model ensures that the correct magic > happens when components of your specific-UI model interact. > > As a general rule when modelling, always try to push as much as > possible (*but no more!*) into the population of a domain. > Don't build yourself a universal computer, but do abstract away > enough details to keep the model clean. If the replacement of > dogs with cats requires a UI domain change then there's something > very wrong. If your data becomes a programming language, then > there's also something wrong. I have no problem with these statements, but they seem to be a nonsequitur to the point I was making. > If the GUI domain is too dumb, then perhaps it doesn't exist. > I'd look again at the domain chart an make sure that the > mission statements are in balence across the system. If you > have a simple domain with a fat interface then you can > almost guarentee that the domain chart is flawed. > > (You can probably live with it, and get a working system, but > that's engineering pragmatism, not the theoretical ideal). As I said elsewhere in the message, I think one can make a case for most GUI domains being smart bridges between the application and the Window Manager. Assuming that view, you probably don't want it to be too smart. OTOH, having an explicit, albeit dumb, OOA for a smart bridge has two advantages: you get the domain firewalls to isolate a notoriously troublesome interface when porting and you explicitly document an important aspect of the requirements (i.e., the user's actual view of the system). > > As I indicated above, one could handle the highlighting by > > navigating a relationship in the GUI or by asking the > > application who owns a dog. In the former case the IDs would > > be created when the instances were created, using techniques > > such as those suggested by Whipp. In the latter case the GUI > > domain could issue an external event with a synchronous > > return. The bridge would navigate the relationship in the > > application domain between DOG and DOG OWNER and return an > > Owner ID. > > I really do *not* like this second approach. It requires you > to perform a multi-domain simulation to validate the UI > domain. This is BAD. You can construct an active test harness, > but that too adds additional complexity to the testing of what > is a trivial domain. If you put the link object into the > domain then the domain becomes complete enough to be simulated > in isolation, and the test harness becomes trivial. If there > is complexity associated with a problem, then its better to > put it in the domain of that problem than outside it. I do not see why multidomain simulation is required for testing. We are talking about a synchronous wormhole here and those are ubiquitous in domain communications. We do this all the time in single domain simulation. You would simulate this the same way you would simulate any request for another domain's data. I agree that you need an infrastructure to supply the data when testing the domain, but you have to have that anyway. Besides, I let the simulator vendor worry about the harness; I just plug in the data. B-) [BTW, I think the wormhole formalism now makes such harnesses much more standardized and easier to use.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Implicit bridges lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... I suspect I have missed the point here completely because it seems all you have done is to restate what wormholes are about while severely restricting what one can do with single domain simulation. > In his presentation on bridging at the SMUG, Campbell McCausland spoke > of the need to eliminate interfaces from a model. He argued that mere > thought of the interface introduces coupling, however slight. The domain > should be its own universe. Explicit bridging may decouple the data > types and the flow of control, but it still contains the coupling that > says "and now something happens outside my universe". If a benevolent > god chooses to send in an external input then that is God's prerogative > But you should never solicit such an action. I missed McCausland's paper but the last statement of this summary bothers me. This seems to preclude things like handshaking protocols, synchronous data requests, or almost any type of significant service request. While I agree that interfaces introduce coupling, that coupling exists in the OOA whether the bridge is implicit or explicit. It appears in event definitions, in wait states, etc. Implicit events might marginally reduce the degree of coupling by better hiding some mechanisms, but the wormhole already abstracts those mechanisms pretty well. > action (new_x, new_y) > { > x := new_x; > y := new_y; > generate R1: Robot_Move_Complete(); > } > > The solicited event, R1, is explicitly generated. The bridge is > completely hidden from the domain model, and so it will now simulate as > a single domain. No test harness is necessary to generate the solicited > event. Either the action completes successfully (the attributes are > correctly set and the event generated) and the event is delivered, or > an exception situation has occurred (to be handed by the [sic] OOA > exception mechanism) It seems to me that you have overly restricted what can be simulated in single domain simulation. If the communications with the outside world are asynchronous, then one important thing to check in single domain simulation is the arrival of R1 at different points of the processing (i.e., by inserting it into the subsequent event queue in different positions). The explicit wormhole provides an excellent hook for a simulator to do this. But without it you have to build the bridge and hang a customized harness off it to test the domain properly -- something I thought you wanted to avoid. At another level, I don't like this because you are hiding crucial information to the analysis. The construct above provides no hint that R1 may be placed on the queue long after 'action' has completed. In fact, it does quite the opposite: it strongly suggests that it will be added to the queue before 'action' completes. Put another way, the fact that some processing is dependent upon an external service being completed is important analysis information, regardless of the communication mechanism, so it should be visible. Suppose R1 moves instance Ai from state A3 to state A4 and R1 is the only way to get there. Now suppose at some time in the future someone is adding an enhancement that requires Ai to move from A4 to A11. That developer might come along and add "generate R2..." right after the "generate R1..." in 'action'. 
[Let's not quibble about whether the event sequence is guaranteed; one can construct more complicated cases where it would be.] The model would always simulate correctly under your single domain approach but it would almost certainly be broken in the multi-domain case. In my view it is important to provide information that affects flow of control explicitly in the OOA so that this sort of error would be at least discouraged. With the implicit bridge it is actually encouraged. > Of course, its easy to construct a multi-domain situation to which > these arguments don't apply. I'd argue that an attempt to construct any > implicit bridge that does not conform to the rules of OOA time is > simply incorrect. I am confused here. R1 gets generated in 'action' regardless of the bridge. I thought the point of your implicit bridge was to provide a hidden SDFD that prevented any Gets from executing until the robot actually got to the Set position. That is, I thought that constructing the SDFD for the implicit bridge was the the point of the exercise (i.e., to conform to multi-domain time rules). > In summary: use implicit bridging with events to allow you to simulate > your models as isolated domains. Connecting other domains should not > effect the behavior of solicited events, only their timing. This makes > test harnesses simpler, and allows more powerful static analysis of the > model. While this approach might work for data accesses, I am not convinced it works for overall flow of control -- at least not without a lot more complexity in the underlying SDFD. The relatively common situation that comes quickly to mind is where there is processing that you want to continue while waiting for an external service. In this case you have to worry about instances being in the wrong state to accept R1 when it is processed. We accommodate this currently in the OOA with deliberate wait states. To eliminate these (i.e., to get rid of this manifestation of coupling in the OOA) with the implicit bridge mechanism you would have to somehow include state transitions in addition to data flows in your SDFD. This can get really nasty, I think (e.g., dealing with self-directed events that only temporarily move away from the acceptance state). No rush for your retort -- I am off for a few days vacation. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman wrote: > > As soon as you start subtyping in one domain to reflect the > > subject matter of the domains you expect to talk to, then you > > lose the potential for domain reuse. Leon Starr put it nicely > > when he said you should avoid using the same noun in 2 domains [Correction: Leon was quoting Campbell McCausland. Sorry, Campbell, for the incorrect attribution] > I do not follow this objection at all in this context. Were > you reading the same message that I wrote? B-) > > Where am I subtyping objects in another domain? The only > place where this is even superficially true would be a > 'Ownership Screen' in the GUI domain vs. a Window object > in Window manager domain. But those are clearly very > different critters and the IM for GUI's Screen and Window > Manager's Window would be very different. 
If I see XXX_SCREEN and YYY_SCREEN objects, then my mind sees a supertype object, SCREEN. The differentiation between the two subtypes, in your proposal, is based on application domain concepts (dog and owner). Even if the supertype doesn't exist, the pollution is still there.

> What noun is used in two domains? Even when I later speculated that there
> *might* be a 'Dog Screen' object, I went to the trouble of pointing out
> explicitly the difference between Dog (adjective) Screen (noun) and UI
> (adjective) _DOG (noun). I further pointed out that it was crucial to be
> able to ensure that such a distinction existed.

The fact that 'Dog' is used as an adjective is irrelevant. If I want an application that links Cats to their Owners then I can't reuse your UI. I'd have to add another object, CAT_SCREEN, whose behaviour is likely to be very similar to DOG_SCREEN.

> Why? In the portion of the message you are objecting to, I
> am proposing neither. I am suggesting that what the user wants
> to see is a screen with two lists on it. It seems to me that
> when I name that entity 'Ownership Screen' I am addressing
> something quite different from the rest of the application.
> That 'something' is unique to the GUI domain, so I see no domain
> pollution at all.

If you think that the two screens have different behaviours, then name them "Selection Screen" and "Output Screen" (or similar). Then the UI is reusable in any situation that requires these concepts. Use data to customise the two screens to the actual application.

> As I said elsewhere in the message, I think one can make a
> case for most GUI domains being smart bridges between the
> application and the Window Manager. Assuming that view, you
> probably don't want it to be too smart. OTOH, having an
> explicit, albeit dumb, OOA for a smart bridge has
> two advantages: you get the domain firewalls to isolate a
> notoriously troublesome interface when porting and you
> explicitly document an important aspect of the
> requirements (i.e., the user's actual view of the system).

Yes, it is traditionally difficult to properly isolate the subject matter of the UI domain. IMHO, two examples of really bad UI domains are the two leading CASE tools for the SM method. They both have an extremely simplistic user interaction model. The fact that it's a difficult domain should lead you to work very hard on the domain, to give it intelligent behaviour: not to dismiss it as being simply a "smart bridge".

> I do not see why multidomain simulation is required for
> testing. We are talking about a synchronous wormhole
> here and those are ubiquitous in domain communications.
> We do this all the time in single domain simulation.
> You would simulate this the same way you would simulate
> any request for another domain's data.

You cannot justify an interface on the basis of "there are a lot of other interfaces". In this scenario, there is no need to request data from another domain. So why do it?

Dave.

--
Dave Whipp, Senior Verification Engineer
Infineon Technologies Corp., San Jose, CA 95112
mailto:David.Whipp@infineon.com  tel. (408) 501 6695
Opinions are my own. Factual statements may be in error.

Subject: RE: (SMU) Need help conceptualizing

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------
> Allen Theobald writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> Would someone help code this up? I am having trouble conceptualizing!
> (esp. the bridge).

Here's a *very* simplistic coding. It only does what you specified, so it probably isn't as complete as you might wish. I don't, for example, maintain a bidirectional link between the UI instances and the application instances. I do, however, use my automatically updated (M) attributes (we had the DFD thread for these a few months ago) to control the is_highlighted attribute. Also, my underlying display system is not a windowing system: I've used domain replacement to attach the UI to an iostream.

(I hope attachments work on this list: they're all ASCII. I've only compiled it on egcs 2.91.66. Hopefully it'll work elsewhere, too.)

---- main.cpp ----

#include "application.h"
#include "user_interface.h"

using namespace application;

int main()
{
    Owner smith("Mr Jones");
    Owner jones("Mrs Smith");
    Owner doe("Mr Doe");

    Dog fido("fido");
    Dog pluto("pluto");
    Dog ben("ben");
    Dog goldie("goldie");
    Dog fred("fred");

    fido.setOwner("Mr Jones");
    pluto.setOwner("Mr Jones");
    ben.setOwner("Mrs Smith");
    goldie.setOwner("Mrs Smith");
    fred.setOwner("Mr Doe");

    user_interface::select_item("ben");
    user_interface::deselect_item("ben");
    user_interface::select_item("fred");
    user_interface::deselect_item("fred");

    user_interface::select_item("Mrs Smith");
    fred.setOwner("Mrs Smith");
    fido.setOwner("Mrs Smith");
    goldie.setOwner("Mr Jones");
    {
        Dog hunter("hunter");
        hunter.setOwner("Mrs Smith");
    }
    user_interface::deselect_item("Mrs Smith");

    {
        ben.setOwner("Mrs Simpson");
        Owner simpson("Mrs Simpson");
        user_interface::select_item("Mrs Simpson");
    }

    return 0;
}

---- application.h ----

#include <string>

using std::string;

// counterparts
namespace user_interface {
    class ListItem;
    class ListItemLink;
}

namespace application {

// 1. DOG (D)
class Dog {
public:
    // *name
    Dog(string name);
    ~Dog();

    // .owner(R1) : string
    void setOwner(string owner);
    string getOwner() const;
    //...

private:
    string _name;
    string _owner;
    user_interface::ListItem *_ui_dog_counterpart;
    user_interface::ListItemLink *_ui_r1_counterpart;
};

// 2. OWNER (O)
class Owner {
public:
    // *name
    Owner(string name);
    ~Owner();
    // ...

private:
    string _name;
    user_interface::ListItem *_ui_owner_counterpart;
};

}

---- application.cpp ----

#include "application.h"
#include "user_interface.h"

using namespace application;

// 1. DOG (D) : *name : string
Dog::Dog(string name)
    : _name(name),
      _ui_dog_counterpart(new user_interface::ListItem(name)),
      _ui_r1_counterpart(0)
{}

Dog::~Dog()
{
    delete _ui_dog_counterpart;
    delete _ui_r1_counterpart;
}

// Dog.owner(R1) : string
void Dog::setOwner(string owner)
{
    _owner = owner;
    delete _ui_r1_counterpart;
    if (!_owner.empty()) {
        _ui_r1_counterpart = new user_interface::ListItemLink(_name, _owner);
    } else {
        _ui_r1_counterpart = 0;
    }
}

string Dog::getOwner() const { return _owner; }
// 2. OWNER (O) : *name : string
Owner::Owner(string name)
    : _name(name),
      _ui_owner_counterpart(new user_interface::ListItem(name))
{}

Owner::~Owner() { delete _ui_owner_counterpart; }

---- user_interface.cpp ----

#include <iostream>
#include <list>
#include <string>
#include "user_interface.h"

using namespace std;
using namespace user_interface;

// 1. LIST_ITEM (LI) : *name : string
ListItem::ListItem(string name)
    : _name(name), _is_selected(false), _is_highlighted(false)
{
    cout << "ListItem(" << _name << ") created" << endl;
    instances.push_back(this);
    updateIsHighlighted();
}

ListItem::~ListItem()
{
    cout << "ListItem(" << _name << ") deleted" << endl;
    instances.remove(this);
    updateLinkedPeers();
}

string ListItem::getName() const { return _name; }

// .is_selected : bool
void ListItem::setIsSelected(bool arg)
{
    if (arg != _is_selected) {
        cout << "ListItem(" << _name << ").is_selected = " << arg << endl;
        _is_selected = arg;
        updateLinkedPeers();
    }
}

bool ListItem::getIsSelected() const { return _is_selected; }

// .is_highlighted (M) : bool
void ListItem::updateIsHighlighted()
{
    bool result = false;
    for (list<ListItemLink *>::const_iterator iter = ListItemLink::instances.begin();
         iter != ListItemLink::instances.end(); iter++) {
        ListItem *peer = (*iter)->getPeer(_name);
        if (peer) {
            if (peer->getIsSelected()) {
                result = true;
                break;
            }
        }
    }
    setIsHighlighted(result);
    return;
}

void ListItem::setIsHighlighted(bool arg)
{
    if (arg != _is_highlighted) {
        _is_highlighted = arg;
        cout << "ListItem(" << _name << ").is_highlighted = " << arg << endl;
    }
}

bool ListItem::getIsHighlighted() const { return _is_highlighted; }

void ListItem::updateLinkedPeers()
{
    for (list<ListItemLink *>::const_iterator iter = ListItemLink::instances.begin();
         iter != ListItemLink::instances.end(); iter++) {
        ListItem *peer = (*iter)->getPeer(_name);
        if (peer) {
            peer->updateIsHighlighted();
        }
    }
}

list<ListItem *> ListItem::instances;
// 2. LIST_ITEM_LINK (LIL)
// * item1 : string
// * item2 : string
ListItemLink::ListItemLink(string item1, string item2)
    : _item1(item1), _item2(item2)
{
    cout << "ListItemLink(" << _item1 << ", " << _item2 << ") created" << endl;
    instances.push_back(this);
    updatePeers();
}

ListItemLink::~ListItemLink()
{
    cout << "ListItemLink(" << _item1 << ", " << _item2 << ") deleted" << endl;
    instances.remove(this);
    updatePeers();
}

void ListItemLink::updatePeers()
{
    ListItem *li;
    li = getPeer(_item1);
    if (li) li->updateIsHighlighted();
    li = getPeer(_item2);
    if (li) li->updateIsHighlighted();
}

// symmetric navigation of reflexive relationship
ListItem *ListItemLink::getPeer(string name)
{
    string peer_name;
    if (_item1 == name) peer_name = _item2;
    if (_item2 == name) peer_name = _item1;
    if (!peer_name.empty()) {
        for (list<ListItem *>::const_iterator iter = ListItem::instances.begin();
             iter != ListItem::instances.end(); iter++) {
            if ((*iter)->getName() == peer_name) {
                return *iter;
            }
        }
    }
    return 0;
}

list<ListItemLink *> ListItemLink::instances;

void user_interface::select_item(string arg)
{
    for (list<ListItem *>::const_iterator iter = ListItem::instances.begin();
         iter != ListItem::instances.end(); iter++) {
        if ((*iter)->getName() == arg) {
            (*iter)->setIsSelected(true);
            return;
        }
    }
}

void user_interface::deselect_item(string arg)
{
    for (list<ListItem *>::const_iterator iter = ListItem::instances.begin();
         iter != ListItem::instances.end(); iter++) {
        if ((*iter)->getName() == arg) {
            (*iter)->setIsSelected(false);
            return;
        }
    }
}

---- user_interface.h ----

#ifndef user_interface_h
#define user_interface_h

#include <list>
#include <string>

using std::list;
using std::string;

namespace user_interface {

// 1. LIST_ITEM (LI)
class ListItem {
public:
    // *name : string
    ListItem(string name);
    ~ListItem();
    string getName() const;

    // .is_selected : bool
    void setIsSelected(bool is_selected);
    bool getIsSelected() const;

    // .is_highlighted (M) : bool
    bool getIsHighlighted() const;

    // updates to (M) attribute: is_highlighted
    void updateIsHighlighted();
    void setIsHighlighted(bool is_highlighted);
    void updateLinkedPeers();

    // instances
    static list<ListItem *> instances;

private:
    string _name;
    bool _is_selected;
    bool _is_highlighted;
};

// 2. LIST_ITEM_LINK (LIL)
class ListItemLink {
public:
    // * item1 : string
    // * item2 : string
    ListItemLink(string item1, string item2);
    ~ListItemLink();

    // symmetric navigation of reflexive relationship R1
    void updatePeers();
    ListItem* getPeer(string name);

    // instances
    static list<ListItemLink *> instances;

private:
    string _item1;
    string _item2;
};

void select_item(string name);
void deselect_item(string name);

}

#endif

Subject: Re: (SMU) Need help conceptualizing

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greetings!

I greatly enjoy and respect the people on this list. No question is too basic to ask! I am an example-oriented learner, so your code is genuinely appreciated. Thank you so much! And thanks to all who contributed to the discussion. Who would have thought that such a simple question would have brought on such a debate? :^)

Every time I read this list I pick up something new. This list is truly an asset!

Kind Regards,

Allen Theobald
Nova Engineering, Inc.

P.S. The attachments came through fine and compiled as well in Visual C++ 6.0 Service Pack 2.
Subject: RE: (SMU) Implicit bridges

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp... > > I suspect I have missed the point here completely because it seems all you > have done is to restate what wormholes are about while severely restricting > what one can do with single domain simulation.

I think you are right. I don't see where I have restricted anything. I've simply elaborated the implications of the rules that currently exist. If you want your model to run on any conceivable valid architecture then all the restrictions already exist. The only significant restriction that I am adding is to require you to abandon concurrent time and to always use simultaneous time.

> > In his presentation on bridging at the SMUG, Campbell McCausland spoke > > of the need to eliminate interfaces from a model. He argued that the mere > > thought of the interface introduces coupling, however slight. The domain > > should be its own universe. Explicit bridging may decouple the data > > types and the flow of control, but it still contains the coupling that > > says "and now something happens outside my universe". If a benevolent > > god chooses to send in an external input then that is God's prerogative. > > But you should never solicit such an action.
>
> I missed McCausland's paper but the last statement of this summary bothers > me.

That last sentence goes slightly beyond where Campbell was willing to go in his talk. He's not quite as extreme as I am on this matter. At least, not in public. He simply pointed out that such solicitation is a form of coupling, and that it would be nice to get rid of it, if possible.

> This seems to preclude things like handshaking protocols, synchronous > data requests, or almost any type of significant service request.

OOA provides both synchronous and asynchronous processes (accessors and event generators). The argument is that you should never need to make service requests. You model the behaviour of your subject matter and worry about its implementation later. Any behaviour that is visible in your subject matter is, in a sense, part of that subject matter. So it should be modeled in the domain. I find the rules of OOA useful when attempting to tease the essence of a subject matter out of its polluted conception. If I can see a behaviour as an implementation of an OOA modeling element (e.g. the Robot Move is an implementation of the event transition) then that part of the behaviour can be abstracted away.

> While I agree that interfaces introduce coupling, that coupling exists in > the OOA whether the bridge is implicit or explicit. It appears in event > definitions, in wait states, etc. Implicit events might marginally reduce > the degree of coupling by better hiding some mechanisms, but the wormhole > already abstracts those mechanisms pretty well.

You only need these wait states, etc. because you are modeling the implementation. If you start introducing these things, then you are fighting the OOA formalism, not leveraging it.

> > action (new_x, new_y)
> > {
> >     x := new_x;
> >     y := new_y;
> >     generate R1: Robot_Move_Complete();
> > }
>
> It seems to me that you have overly restricted what can be simulated in > single domain simulation.
> If the communications with the outside world are > asynchronous, then one important thing to check in single domain simulation > is the arrival of R1 at different points of the processing (i.e., by > inserting it into the subsequent event queue in different positions). The > explicit wormhole provides an excellent hook for a simulator to do this. > But without it you have to build the bridge and hang a customized harness > off it to test the domain properly -- something I thought you wanted to > avoid.

No, you don't need a test harness. The OOA meta model defines the possible behaviour of the event. When you use the domain in a system, the possible behaviour is restricted further. There is no new behaviour introduced by the implementation that was not permitted in the single-domain world.

Yes, if you want to ensure that your model will work on any architecture, then you need to simulate with different delay models. It should be possible to do a lot of checks using static methods. Dynamic verification will probably use some form of Monte Carlo simulation. SES used to have quite a nice simulator where you could define statistical delay models for simulation.

> At another level, I don't like this because you are hiding crucial > information to the analysis. The construct above provides no hint that R1 > may be placed on the queue long after 'action' has completed. In fact, it > does quite the opposite: it strongly suggests that it will be added to the > queue before 'action' completes. Put another way, the fact that some > processing is dependent upon an external service being completed is > important analysis information, regardless of the communication mechanism, > so it should be visible.

No, No, No. The generation of the self-directed event does nothing more than guarantee that no other events will be processed by the state until the event is delivered. We can choose, at the system level, to not deliver the event until the robot move is complete. But this delay is a subset of the permitted behaviour under OOA. If you think that a self-generated event implies zero (or small) time, then your thinking is architecturally polluted. The only permitted variation is the timing, not the function. So there is no reason to expose it in the OOA.

I would allow one deviation to aid analysts: I would like to have the ability to mark some attributes as being synchronised. This would permit the analyst to state that an attribute's accessors always act as if they were atomic. (I'd also add an atomic read-modify-write process).

> Suppose R1 moves instance Ai from state A3 to state A4 and R1 is the only > way to get there. Now suppose at some time in the future someone is adding > an enhancement that requires Ai to move from A4 to A11. That developer > might come along and add "generate R2..." right after the "generate R1..." > in 'action'. [Let's not quibble about whether the event sequence is > guaranteed; one can construct more complicated cases where it would be.] > The model would always simulate correctly under your single domain approach > but it would almost certainly be broken in the multi-domain case. In my > view it is important to provide information that affects flow of control > explicitly in the OOA so that this sort of error would > be at least discouraged. With the implicit bridge it is actually > encouraged.

This change may break the bridge, but it doesn't break the domain. You simply need to find a way to construct a new bridge that does not break the time rules of OOA.
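To make the timing point concrete, here is a minimal sketch (not from the original posts; all names invented) of a single-queue architecture in which the bridge withholds a solicited event until the external activity reports completion. It only illustrates that the deferral changes delivery time, not function; it ignores the separate rule that the generating state machine accepts no other events while its self-directed event is outstanding.

#include <deque>
#include <iostream>
#include <string>

// Hypothetical architecture-level event: the bridge may mark it "held"
// until the external (robot) activity finishes.
struct Event {
    std::string name;
    bool held;
};

class EventQueue {
public:
    void post(const std::string& name, bool held = false) {
        Event e;
        e.name = name;
        e.held = held;
        _queue.push_back(e);
    }
    // Called by the bridge when the external activity completes.
    void release(const std::string& name) {
        for (std::deque<Event>::iterator i = _queue.begin(); i != _queue.end(); ++i)
            if (i->name == name)
                i->held = false;
    }
    // Deliver the first event that is not being withheld, if any.
    bool dispatchOne() {
        for (std::deque<Event>::iterator i = _queue.begin(); i != _queue.end(); ++i) {
            if (!i->held) {
                std::cout << "delivering " << i->name << std::endl;
                _queue.erase(i);
                return true;
            }
        }
        return false;
    }
private:
    std::deque<Event> _queue;
};

int main() {
    EventQueue q;
    q.post("Robot_Move_Complete", true);  // generated in the action, withheld by the bridge
    q.post("Some_Other_Event");
    q.dispatchOne();                      // delivers Some_Other_Event
    q.release("Robot_Move_Complete");     // robot reports it has reached (x, y)
    q.dispatchOne();                      // now Robot_Move_Complete is delivered
    return 0;
}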
If you develop a hideously complex state model, then keeping track of all the potentially unsynchronised accesses to attributes will be a problem. If your single domain model contains such accesses, then implicit bridging becomes dangerous. But my argument is that the single-domain model is incorrect if it contains such accesses, because there are potentially correct architectures on which it will not work.

> > Of course, it's easy to construct a multi-domain situation to which > > these arguments don't apply. I'd argue that an attempt to construct any > > implicit bridge that does not conform to the rules of OOA time is > > simply incorrect.
>
> I am confused here. R1 gets generated in 'action' regardless of the > bridge. I thought the point of your implicit bridge was to provide a hidden > SDFD that prevented any Gets from executing until the robot actually got to > the Set position. That is, I thought that constructing the SDFD for the > implicit bridge was the point of the exercise (i.e., to conform to > multi-domain time rules).

Where did I construct the SDFD? My statement was simply that if you construct an incorrect model, or an incorrect bridge, then it is incorrect. The method cannot be expected to infer correct behaviour onto an incorrect model.

> > In summary: use implicit bridging with events to allow you to simulate > > your models as isolated domains. Connecting other domains should not > > affect the behavior of solicited events, only their timing. This makes > > test harnesses simpler, and allows more powerful static analysis of the > > model.
>
> The relatively common situation that comes quickly to mind is where there is > processing that you want to continue while waiting for an external service. > In this case you have to worry about instances being in the wrong state to > accept R1 when it is processed. We accommodate this currently in the OOA > with deliberate wait states. To eliminate these (i.e., to get rid of this > manifestation of coupling in the OOA) with the implicit bridge mechanism you > would have to somehow include state transitions in addition to data flows in > your SDFD. This can get really nasty, I think (e.g., dealing with > self-directed events that only temporarily move away from the acceptance > state).

Again, we don't need SDFDs for the implicit bridge (though they might exist in the server). If you have wait states, etc., then your model is probably polluted. If you simply assume that external activities are complete by the time that you need their results, then things become simple. As long as you can define the expected result in the single-domain case, then this assumption is feasible. My whole discussion on the meaning of "invariant behaviour over sequentially constrained paths" was intended to cover the need for parallel activities during the activity of an implicitly bridged process.

My conclusion is that you need to detect and eliminate all unsynchronised accesses to attributes. If your model is clean, then implicit bridging becomes simpler.

There are still some cases where implicit bridging is impossible. Consider a synchronous random number generator. There are no constructs in OOA that allow you to have a process whose behaviour is not completely predictable based on its inputs (plus the state of the domain). In this case, a synchronous wormhole needs to be defined as a transform with side-effects. But these situations are relatively rare. I cannot find any places in the ODMS domain where explicit bridging is required.
Once we get a well-defined exception mechanism in OOA, the number of cases will drop again. Without an exception mechanism, we cannot model alternate paths through a lifecycle based on failure of specific parts of the lifecycle. We need wormholes + conditional dataflows to detect the failure and respond to it. This is like checking errno in C: easy to get wrong.

Dave.
-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: Re: (SMU) Need help conceptualizing

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

I spent this morning looking at your code (thanks again) and was wondering: are ListItem and ListItemLink the actual bridge?

Are there "preferred" ways of constructing bridges? E.g. multiple inheritance, mixins, etc. What are some of the different ways to create bridges?

Apologies if these have already been asked. No wait! I know the archives are available via the "get" command, but is there any way to search them?

Kind Regards,

Allen

Subject: RE: (SMU) Need help conceptualizing

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Allen Theobald wrote:
> I spent this morning looking at your code (thanks again) and was > wondering: are ListItem and ListItemLink the actual bridge?

Nice question, but I'd say no. I'll elaborate below.

> Are there "preferred" ways of constructing bridges? E.g. multiple > inheritance, mixins, etc. What are some of the different ways to > create bridges?

You will see that the application domain has a few pointers into the UI domain (the "counterpart" members). It is these pointers that are the projection of the bridge into implementation. The bridge itself is not visible as an independent entity.

An interesting point occurs when you look at the interface that the application domain actually uses when talking to the UI. I could create interfaces (base classes) named Object and Relationship and use these instead of ListItem and ListItemLink within the application code. I can then define, in the UI domain:

class ListItem : public SM_Object { ... };
class ListItemLink : public SM_Relationship { ... };
//...
class SM_Relationship : public SM_Object { ... };

We would then refactor the UI classes to move the navigation and instances collections into the base classes. This enables us to use the UI domain as the architecture. We then make one last change:

class Dog : public ListItem  // ListItem is-a SM_Object
{
    // ...
private:
    SM_Relationship *_R1_counterpart;
};

(Those application classes that aren't mapped to the user interface would simply inherit from SM_Object). I can use factory classes to encapsulate the bridge still further.

Once I inherit Dog from ListItem, which inherits from Object, ListItem can define interfaces that Dog (and Owner) must implement. Also, ListItem always has a pointer to its client. This gives me the bidirectional link: Dog (and Owner) think that their base classes are architectural, so only use the SM_Object interface (except for creation). ListItem sits between the application and the architecture, adding additional behaviour where needed. The beauty of implicit bridging is that server domains are architectures.

I don't think there is any preferred way to do bridging. It depends on your requirements.
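A minimal sketch of one possible shape for the refactoring described above; the member names and the peerOf() helper are invented here, not taken from the posted attachments.

#include <list>
#include <string>

// Architectural base: instance collections and identity live here.
class SM_Object {
public:
    SM_Object(const std::string& name) : _name(name) { instances.push_back(this); }
    virtual ~SM_Object() { instances.remove(this); }
    std::string name() const { return _name; }
    static std::list<SM_Object*> instances;   // moved up from ListItem/ListItemLink
private:
    std::string _name;
};
std::list<SM_Object*> SM_Object::instances;

// Architectural relationship: symmetric navigation lives here.
class SM_Relationship : public SM_Object {
public:
    SM_Relationship(const std::string& name, SM_Object* a, SM_Object* b)
        : SM_Object(name), _a(a), _b(b) {}
    SM_Object* peerOf(const SM_Object* o) const
        { return o == _a ? _b : (o == _b ? _a : 0); }
private:
    SM_Object* _a;
    SM_Object* _b;
};

// UI domain: adds presentation behaviour on top of the architecture.
class ListItem : public SM_Object {
public:
    ListItem(const std::string& name) : SM_Object(name), _is_selected(false) {}
    void setIsSelected(bool s) { _is_selected = s; /* update peers as before */ }
private:
    bool _is_selected;
};

// Application domain: Dog is-a ListItem is-a SM_Object.
class Dog : public ListItem {
public:
    Dog(const std::string& name) : ListItem(name), _R1_counterpart(0) {}
private:
    SM_Relationship* _R1_counterpart;   // counterpart of the Dog/Owner relationship
};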
I gave you code that was about as simple as possible for the problem you presented. I always find that a good place to start. It's not too difficult to write a code generator to produce that style. Then you can evolve the code generator towards a more realistic implementation.

There are many reasons for creating explicit code for a bridge. You may need to decouple different control flow (e.g. asynch impl of a synchronous call). You may decide that you want to create explicit 'client' base classes to avoid direct inheritance between objects in 2 domains. You may decide to put counterpart factories in the bridge. As soon as you move away from the trivial, the option count explodes.

> Apologies if these have already been asked. No wait! I know the > archives are available via the "get" command, but is there any way > to search them?

Last time I tried, I couldn't access the archives. It'd be really nice to have them visible on the web.

Dave.
-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: RE: (SMU) Implicit bridges

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> The relatively common situation that comes quickly to > mind is where there is processing that you want to > continue while waiting for an external service.

A problem that you probably aren't thinking of here, but which, IMHO, is important, is that self-directed events stall the state machine until the event is delivered. That prevents continued processing. There are 2 solutions: either you bounce the event off another object (very unsatisfactory), or don't use expedited self-directed events.

The rules about self-directed events are a hack introduced to gloss over a weakness in the OOA behaviour formalism. It would be nice to treat all events as equal (none more equal than others), and to introduce a specific mechanism to support non-event state transition. UML has the concept of the unlabeled transition (possibly guarded) which could be used for this.

> In this case you have to worry about instances being > in the wrong state to accept R1 when it is processed. > We accommodate this currently in the OOA with deliberate > wait states.

Nothing new here. You sometimes have to do things like this for internal events.

> To eliminate these (i.e., to get rid of this manifestation of > coupling in the OOA) with the implicit bridge mechanism you > would have to somehow include state transitions in addition > to data flows in your SDFD. This can get really nasty, I > think (e.g., dealing with self-directed events that only > temporarily move away from the acceptance state).

I don't really see what you're getting at here. When you use an implicit bridge, the client domain doesn't need an SDFD. And you wouldn't put this stuff in the server's. Just accept that the OOA time rules tell you that the event can be delivered at a variety of times, and handle this in the state models.

Dave.
-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: (SMU) Why Should I?

Chris Curtis writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have a question regarding OOA/RD, especially as concerns architecture.
I have some background in S-M OOA; most of the state modeling and RD is still a bit new to me, so I'm looking for a little bit of exposition. One of the subdomains I have identified in the system we are currently analysing is a Transactions domain. Major objects in this domain are Transaction, Customer, Batch, User, Transaction Code. Transactions are created in groups (Batches) which are eventually closed. At some point in time the closed Batches are processed, which results in the individual Transactions being posted and the Customer balances being updated appropriately. The OIM is not really too fleshed out at this stage - it's only been a couple of days of analysis - but this should be enough to get a general idea. It's a fairly common construct that I'm sure is used in lots of accounting-type systems. So...to my real question. I have modeled some basic states for things like Transaction and Batch. Now the traditional implementation would be to just stick everything in a database and run a process over the database on a regular schedule to do all the posting and such. The question is why should I bother implementing an architecture and so forth, with objects sending messages to each other, and all that? Isn't that just too much added complexity for such an essentially simple process? What does it gain me? *I* think that the primary advantages are separation of application from implementation, as well as formalism. The possibility of code generation is an added bonus, but an expensive (in tool costs) one. Can anyone enlighten me further? I should add that this is pretty much the ongoing battle with some old-school folks. --Chris ------------------------- Chris Curtis Systems Engineer "Where am I, and what am I doing in this handbasket?" Subject: RE: (SMU) Why Should I? "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > -----Original Message----- > [mailto:owner-shlaer-mellor-users@projtech.com]On Behalf Of Chris Curtis > Sent: Tuesday, October 12, 1999 9:26 PM > ...The question is why > should I bother implementing an architecture and so forth, with objects > sending messages to each other, and all that? Isn't that just too much > added complexity for such an essentially simple process? What > does it gain me? I'm not sure I get the gist of the question - let me see if I can "repeat" it: You do not see why you should implement your system as a set of Finite State Machines. Assuming I'm somewhat on the mark - let me cut at this at two levels: a) perhaps you assume that a Shlaer-Mellor implementation has a separate O/S thread of control for each active object b) perhaps you see the most straightforward implementation for this domain is fundamentally synchronous (function call->function call, as opposed to FSM->event->FSM) Working a) first, this assumption is not true. The most common simple implementation of a Shlaer-Mellor system is to have a single thread where there is an event loop with a single queue that dispatches events to all active instances. Regarding b), the OOA-97 paper (from Kennedy-Carter, www.kc.com) defines the notion of analysis-explicit domain and object based services (like methods), which are a straightforward means of expressing state-independent behavior through synchronous "function" calls. The current use of UML in the OOA-RD world further reinforces these services. 
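A minimal sketch (invented names, not from the posting) of the single-threaded dispatch described in point (a): one FIFO queue serialises events for all active instances on a single thread.

#include <queue>

// Each active instance is a state machine that consumes events.
class StateMachine {
public:
    virtual ~StateMachine() {}
    virtual void consume(int event_id) = 0;   // run the action for this event
};

struct Event {
    StateMachine* target;
    int           id;
};

class Dispatcher {
public:
    void post(StateMachine* target, int id) {
        Event e = { target, id };
        _queue.push(e);
    }
    // The event loop: pop and deliver until the queue drains.
    void run() {
        while (!_queue.empty()) {
            Event e = _queue.front();
            _queue.pop();
            e.target->consume(e.id);          // actions may post further events
        }
    }
private:
    std::queue<Event> _queue;
};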
Here at Pathfinder, we work with a lot of new projects, and finding the right balance between state models and synchronous services is difficult for some. Our quick tip here is to understand why an "object lifecycle" (per the Shlaer-Mellor book) is not just any old state model, and model the state-dependent behavior in object lifecycles.

I hope I was aiming in the right direction.

Subject: Re: (SMU) Why Should I?

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Curtis...

> One of the subdomains I have identified in the system we are currently > analysing is a Transactions domain. Major objects in this domain are > Transaction, Customer, Batch, User, Transaction Code. Transactions are > created in groups (Batches) which are eventually closed. At some point in > time the closed Batches are processed, which results in the individual > Transactions being posted and the Customer balances being updated > appropriately.

It would help to know what the specific mission of the domain is and what services it is supposed to provide within the application context.

> The OIM is not really too fleshed out at this stage - it's only been a > couple of days of analysis - but this should be enough to get a general > idea. It's a fairly common construct that I'm sure is used in lots of > accounting-type systems.
>
> So...to my real question. I have modeled some basic states for things like > Transaction and Batch.

I worry about this. I think the OIM should be completed before launching into state modeling. You might decide to complete only a portion of it relevant to certain features if you are doing incremental development, but that portion should be completed. An OIM represents the static model and there is no reason why that cannot be completed before the dynamic state machines. That is not to say you might not have to iterate back with modifications once you actually do the dynamic portion. The idea is that you should not move on to state machines until the relevant portions of the OIM are as well defined as you can make them at that point in time. Typically, though, such iterative modifications occur because when doing the dynamic model you discover something in the problem space that you overlooked on the first pass.

> Now the traditional implementation would be to just > stick everything in a database and run a process over the database on a > regular schedule to do all the posting and such. The question is why > should I bother implementing an architecture and so forth, with objects > sending messages to each other, and all that? Isn't that just too much > added complexity for such an essentially simple process? What does it gain me?
>
> *I* think that the primary advantages are separation of application from > implementation, as well as formalism. The possibility of code generation > is an added bonus, but an expensive (in tool costs) one. Can anyone > enlighten me further?

I agree -- that is pretty much the goal of the methodology. When engaged in this type of discussion I usually fall back on the following points...

(1) If there is ever a chance that you will have to port to another platform, this is a huge benefit because the solution logic of the OOA is invariant. This isolates changes and failures in the new environment to the architectural code.

(2) The architecture allows substantial reuse on a single platform. If it is done properly, every new application/enhancement can use it as is.
(3) The decoupling of logical function from implementation is pretty much the same thing as decoupling two components (domains). The systems will be more robust and maintainable if they are decoupled through a data-only bridge. The separation of design and implementation provides a similar encapsulation of function. In my view this is a very strong argument. Everyone who does OT will express the notion that you are eliminating 'spaghetti code' by using encapsulation, decoupling, etc. But if you have tendrils of platform-specific implementation running through your solution description, isn't that just 'spaghetti code' at the design level? Doesn't it seem both intuitive and aesthetically pleasing to segregate those issues so that one can focus on each individually?

(4) By eliminating the implementation issues from the OOA one obtains a much simpler representation of the system logic. This is much easier to understand and it makes maintenance much easier. We are able to isolate a logical problem to a single action or synchronous service about 80% of the time just looking at the models before going near the debugger or simulator. It also makes it much easier to add enhancements. Take a look at a system designed for maintainability using the ideas in John Lakos' book, "Large-Scale C++ Software Design". It will be littered with classes that exist solely to isolate implementation issues so that it becomes difficult to recognize the pure solution logic. These disappear from the OOA using S-M, making it much easier to grok. Similarly, if there is an implementation problem, you are focused on those issues in the architectural models.

(5) You have to address the architectural issues anyway. One way to view the architecture is that it makes life simpler. This is really a combination of (2) and (3). What is an architecture, really? It is a collection of tools that allow translation of the models. If you are doing manual code generation, you would most likely start with a collection of templates where the programmer fills in the placeholders. The next step is to have some text processing language scripts fill in some of the placeholders for you. Then you add a few library routines for doing things like event queue managers, navigating relationships, etc. This is all stuff that developers might write themselves if they weren't focused on getting the application-of-the-moment out the door. They still do much of the same thing via cut&paste.

A good example is a simple synchronous architecture where an event simply becomes a direct call to a state action routine. We did our first application this way (our CASE tool didn't have a code generator) and the vast majority of it was done coding directly from the OOA diagrams in Visual C++. Our next application was truly asynchronous so we added a couple of library classes for the event manager and modified some naming conventions. While we were at it we gussied up the templates a bit and added some infrastructure to automatically create certain types of bridges. This all made coding easier for a relatively small investment in time.

> I should add that this is pretty much the ongoing battle with some > old-school folks.

It always is. But (5) may be useful here. By adopting the methodology the developers get to schedule time to make tools that will reduce a lot of the tedium in coding. For example, in your architecture you may decide that a particular relationship needs a two-way smart pointer.
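One possible shape for such a reusable two-way relationship pointer; the class and method names are hypothetical, not from the post.

// A reusable holder for one instance of a binary relationship: either
// participant can be reached from the other through the link, and both
// ends are set and cleared together so navigation never goes stale.
template <class A, class B>
class TwoWayLink {
public:
    TwoWayLink() : _a(0), _b(0) {}
    void relate(A* a, B* b) { _a = a; _b = b; }
    void unrelate()         { _a = 0; _b = 0; }
    bool isRelated() const  { return _a != 0 && _b != 0; }
    A* aEnd() const { return _a; }
    B* bEnd() const { return _b; }
private:
    A* _a;
    B* _b;
};

// Usage sketch: the architecture instantiates one per relationship instance,
// e.g. TwoWayLink<Dog, Owner> r1;  r1.relate(&rex, &joe);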
Now you can justify the time and effort to make a class for that pointer that can be reused whenever you need one again rather than hard-wiring some kludge into today's application code. At the same time your OOA remains uncluttered because that class is not part of the basic solution logic so it is documented with the rest of the architecture. Put another way, the most likely place to apply design patterns with high reuse is in the architecture dealing with repetitive constructs like relationship navigation, opening database transactions, etc. One way to view the architecture is that it actually implements platform-specific design patterns in a reusable manner.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> If I see XXX_SCREEN and YYY_SCREEN objects, then my mind sees a > supertype object, SCREEN. The differentiation between the 2 > subtypes, in your proposal, is based on application domain > concepts (dog and owner). Even if the supertype doesn't exist, > the pollution is still there.

OK, there probably would be a supertype. But I still don't get the leap to domain pollution. There is nothing that prevents you from having different abstractions of the same underlying entity in two domains. The example used in the PT classes is a Train Icon in a UI domain and a Train object in another domain.

In this case, though, things are even more tenuous. A Screen is a bag of data values with none of the semantic content of the DOG in the application. They could have been 'Screen A' and 'Screen B' just as easily. The whole point of a Screen in the GUI domain is to display a clutch of data that the end user thinks is related in some context. Associating the data via naming conventions with a Dog or Dog Owner simply reflects the user's perception of what relates the data. Where that data comes from in the application is completely irrelevant when deciding what Screen objects should be in the OIM.

> The fact that 'Dog' is used as an adjective is irrelevant. If I > want an application that links Cats to their Owners then I can't > reuse your UI. I'd have to add another object, CAT_SCREEN, whose > behaviour is likely to be very similar to DOG_SCREEN.

True and I agree with the basic idea vis a vis domains in general. Such a GUI screen would have limited reuse -- only in applications that also dealt with dogs. One could get around this by calling the screens something generic, but that would not be how the user views them.

However, I would be skeptical that would solve the problem anyway because I think GUI domains are a special situation. By its nature a GUI domain is specific to an application because it reflects how a user will use that particular application. A GUI domain for an ATC is unlikely to be reusable without modification even for a train scheduler, much less an ATM. The only domain where things become sufficiently generic is at the lower implementation/architecture level of a Window Manager. As I indicated previously I tend to think of UI domains as documentation of smart bridges that make requirements traceability easier to manage when moving data between the application and the Window Manager.

> > Why?
> > In the portion of the message you are objecting to, I > > am proposing neither. I am suggesting that what the user wants > > to see is a screen with two lists on it. It seems to me that > > when I name that entity 'Ownership Screen' I am addressing > > something quite different from the rest of the application. > > That 'something' is unique to the GUI domain, so I see no domain > > pollution at all.
>
> If you think that the two screens have different behaviours, then > name them "Selection Screen" and "Output Screen" (or similar). > Then the UI is reusable in any situation that requires > these concepts. Use data to customise the 2 screens to the > actual application.

Ah, but the problem as stated was that the user wants to see the two lists on a single screen. I don't think we want to redesign the customer's view of the world so that our domain is reusable. B-)

> Yes, it is traditionally difficult to properly isolate the > subject matter of the UI domain. IMHO, 2 examples of really > bad UI domains are the 2 leading CASE tools for the SM > method. They both have an extremely simplistic user > interaction model. The fact that it's a difficult domain should > lead you to work very hard on the domain, to give it > intelligent behaviour: not to dismiss it as being simply a > "smart bridge".

I tend to agree with you about S-M CASE tool GUIs. Ours has screen choices around opening and closing transactions in the underlying database! To the extent that this is true I would argue that the real problem is Product Out. The tool developers set out to make CASE tools for S-M and the GUIs are driven by their solutions or, at best, their perceptions of what would be useful displays. It is only now that the vendors are starting to accept Voice Of the Customer feedback.

But I would still argue that what a GUI domain is about is moving piles of data back and forth between the Window Manager and the rest of the application. To me that is still pretty much a bridge. When there are issues around RDB transaction processing, Undo, etc. it may have to get smarter and cache some data, but it still just moves data between domains.

> > I do not see why multidomain simulation is required for > > testing. We are talking about a synchronous wormhole > > here and those are ubiquitous in domain communications. > > We do this all the time in single domain simulation. > > You would simulate this the same way you would simulate > > any request for another domain's data.
>
> You cannot justify an interface on the basis of "There are > a lot of other interfaces". In this scenario, there is no > need to request data from another domain. So why do it?

Precisely because this domain should not understand the semantics of relationships between dogs and dog owners, much less mimic them. That *would* be pollution for a GUI domain, IMO. I believe a relationship between 'Dog Screen' and 'Ownership Screen' should exist only to describe GUI relationships (e.g., something like the fact that 'Dog Screen' can be invoked from a control on 'Ownership Screen').

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...
> Who would have thought that such a > simple question would have brought on such a debate?

At least the debate is about modeling practices rather than what the notation really means. I kind of miss OTUG and those marvelous discussions about the nuances of an aggregation relationship vs. a composition relationship. B-) Besides, for Whipp and me there are no simple questions.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Why Should I?

Chris Curtis writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Responding to lahman...
>
> It would help to know what the specific mission of the domain is and > what services it is supposed to provide within the application > context.

The Transaction domain is supposed to provide support for other domains to enter, modify, and post batches of transactions. A Transaction is a single action (on a Customer's account, in this case). Probably need to clarify the domain boundary there...

> I worry about this. I think the OIM should be completed before > launching into state modeling. You might decide to complete only > a portion of it relevant to certain features if you are doing > incremental development, but that portion should be completed. > An OIM represents the static model and there is no reason why > that cannot be completed before the dynamic state machines.

I should probably back up here and state that at this point I am trying to take an example portion of the system all the way through the OOA/RD process as a demonstration. So far the entire system has had around 2 days of analysis....so ordinarily I would not be at this point yet.

> That is not to say you might not have to iterate back > with modifications once you actually do the dynamic portion. > The idea is that you should not move on to state machines until > the relevant portions of the OIM are as well defined as you can > make them at that point in time. Typically, though, such > iterative modifications occur because when doing the dynamic model > you discover something in the problem space that you overlooked > on the first pass.

Yes, I've already noticed that.

> (1) If there is ever a chance that you will have to port to another > platform, this is a huge benefit because the solution logic of the > OOA is invariant. This isolates changes and failures in the new > environment to the architectural code.

This is so obvious from the method I should have been able to formulate it... but this is particularly applicable for this project: it has to be cross-platform out of the box.

> (2) The architecture allows substantial reuse on a single > platform. If it is done properly, every new > application/enhancement can use it as is.

A nice idea, to be sure. I haven't yet really seen it used much, though. Probably because most of the projects I've worked on haven't really had much formal design-for-reuse in them.

> (3) The decoupling of logical function from implementation is > pretty much the same thing as decoupling two components (domains). > The systems will be more robust and maintainable if they are > decoupled through a data-only bridge. The separation of > design and implementation provides a similar encapsulation of > function.

Absolutely agreed.
> (4) By eliminating the implementation issues from the OOA one > obtains a much simpler representation of the system logic. This is > much easier to understand and it makes maintenance much easier. > We are able to isolate a logical problem to a single action or > synchronous service about 80% of the time just looking at the > models before going near the debugger or simulator. It also makes > it much easier to add enhancements.

Yes...in fact, most of the developers who have seen the models we have developed so far are absolutely thrilled at the clarity with which they understand the system. Tech support folks, too.

> (5) You have to address the architectural issues anyway. One way > to view the architecture is that it makes life simpler. This is > really a combination of (2) and (3). What is an architecture, > really? It is a collection of tools that allow translation of the > models. If you are doing manual code generation, you would > most likely start with a collection of templates where the > programmer fills in the placeholders. The next step is to have > some text processing language scripts fill in some of the > placeholders for you. Then you add a few library routines for > doing things like event queue managers, navigating relationships, > etc. This is all stuff that developers might write themselves if > they weren't focused on getting the application-of-the-moment out > the door. They still do much of the same thing via cut&paste.

It's so simple it's beautiful. Sigh. I'll never understand why people can't justify $50k on tools but spend $150k on an extra person's salary just to maintain the mess they built.

Thank you ... this discussion has helped me to clarify my thoughts on why and how and benefits and such.

--Chris

Subject: Re: (SMU) Implicit bridges

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > This seems to preclude things like handshaking protocols, synchronous > > data requests, or almost any type of significant service request.
>
> OOA provides both synchronous and asynchronous processes (accessors > and event generators). The argument is that you should never need > to make service requests. You model the behaviour of your subject > matter and worry about its implementation later. Any behaviour > that is visible in your subject matter is, in a sense, part of > that subject matter. So it should be modeled in the domain.

Taking this to an extreme it seems to me that every application would have only one domain other than the architectural domains. In particular, I find your third sentence disconcerting. When I am talking about invoking services in other domains I am talking about services at the problem solution level, not the implementation of a particular domain.

As a specific example, we have a domain that understands the semantics of a complete digital test (timing, patterns, voltage levels, etc.). We have a separate domain that understands the hardware at the register level. It does not understand tests at all but it is really good at generating individual hardware instructions for atomic operations at the test level (e.g., Read Pattern Pass/Fail). [We actually have a third at a level of abstraction between these two, but I'll keep it simple.] The domain that understands tests has to utilize the services of the lower level hardware interface domain to run a test on a particular set of hardware (we can run the same tests on different hardware).
I do not see how it would be possible not to make such service requests unless one incorporated the hardware interface domain in the higher level domain to form a single domain. Moreover, those requests are asynchronous. Regardless of the view of time in the client domain, some processing cannot be performed until the service indicates that it has completed (i.e., further state transitions need to be blocked until the event arrives).

> You only need these wait states, etc. because you are modeling the > implementation. If you start introducing these things, then you > are fighting the OOA formalism, not leveraging it.

I think we need those wait states, etc. because of the nature of asynchronous communications between domains. Sure, I could use a transform to poll the hardware (in practice, maybe even to peek at the event queue since transform bodies are outside the ken of the OOA) as an alternative. I happen to prefer the wait state because it is far clearer what is happening in the OOA. I could also modify the flow of control of the domain so that generating events is delayed until after the response event is received, thus forcing the domain to grind to a halt in the expected states (though that gets ugly quick). The issue is that it is crucial to the OOA to design it so that it can accommodate delays (i.e., block processing) in delivery of response events. I do not see how this has anything to do with the implementation.

> > > action (new_x, new_y)
> > > {
> > >     x := new_x;
> > >     y := new_y;
> > >     generate R1: Robot_Move_Complete();
> > > }
>
> > It seems to me that you have overly restricted what can be simulated in > > single domain simulation. If the communications with the outside world are > > asynchronous, then one important thing to check in single domain simulation > > is the arrival of R1 at different points of the processing (i.e., by > > inserting it into the subsequent event queue in different positions). The > > explicit wormhole provides an excellent hook for a simulator to do this. > > But without it you have to build the bridge and hang a customized harness > > off it to test the domain properly -- something I thought you wanted to > > avoid.
>
> No, you don't need a test harness. The OOA meta model defines the > possible behaviour of the event.

But it doesn't define the data packet returned, either synchronously or asynchronously. That has to be provided via some sort of harness. My point was that the wormhole paradigm provides a simple way to handle that for the simulator vendor so the analyst merely supplies the data packets. Without the wormhole you would have to do that yourself because the simulator would not understand the bridge.

> Yes, if you want to ensure that your model will work on any > architecture, then you need to simulate with different delay > models. It should be possible to do a lot of checks using > static methods. Dynamic verification will probably use some > form of Monte Carlo simulation. SES used to have quite a nice > simulator where you could define statistical delay models > for simulation.

This is what I mean by your severely restricting things. As I understand it you would not be able to use such simulators because you have hard-wired the events to be synchronous in the domain during single domain simulation (i.e., the 'Robot_Move_Complete' event is placed on the queue during the action when the 'generate Robot_Move_Complete' is executed in the simulation).
This would preclude statistical delays in event processing or any other reordering of the event processing order.

> No, No, No. The generation of the self-directed event does nothing > more than guarantee that no other events will be processed by the > state until the event is delivered. We can choose, at the system > level, to not deliver the event until the robot move is complete. > But this delay is a subset of the permitted behaviour under OOA.

This seems to be the source of my disconnect. There are a couple of things I don't understand here. First, I thought you said that in single domain simulation the event generation would just synchronously place the expected return event on the queue. Second, the expected return event in the asynchronous situation might not be directed at that state, instance, or even object requesting the service. Therefore the self-directed event would have to be generated in the target instance simultaneously in some magical manner. Third, I just don't understand how this process by which the 'system' chooses not to deliver the event is different from an asynchronous wormhole, other than the fact that it is limited to self-directed events. Somehow you have to indicate in the OOA which events are really self-directed (i.e., Now) vs. those that set up guarding (i.e., Later).

> > Suppose R1 moves instance Ai from state A3 to state A4 and R1 is the only > > way to get there. Now suppose at some time in the future someone is adding > > an enhancement that requires Ai to move from A4 to A11. That developer > > might come along and add "generate R2..." right after the "generate R1..." > > in 'action'. [Let's not quibble about whether the event sequence is > > guaranteed; one can construct more complicated cases where it would be.] > > The model would always simulate correctly under your single domain approach > > but it would almost certainly be broken in the multi-domain case. In my > > view it is important to provide information that affects flow of control > > explicitly in the OOA so that this sort of error would > > be at least discouraged. With the implicit bridge it is actually > > encouraged.
>
> This change may break the bridge, but it doesn't break the domain. > You simply need to find a way to construct a new bridge that does > not break the time rules of OOA.

My first difficulty is that there might not be an obvious problem with the bridge until it gets into the field. A lot of asynchronous systems behave pretty synchronously until some bizarre situation arises. I want some clue in the OOA that this might be an issue so I have a better chance of preventing the problem. Right now when I see a wormhole the warning lights go on.

I would argue, though, that the domain might have to be modified in some other manner. As I argue above, the domain flow of control may have to deal with R1 being delayed. That flow of control might have to be modified in some way for the enhancement processing that R2 is part of. Again, I want some clear clue in the OOA that I need to worry about such things when making enhancements.

> Where did I construct the SDFD? My statement was simply that if you > construct an incorrect model, or an incorrect bridge, then it is > incorrect. The method cannot be expected to infer correct behaviour > onto an incorrect model.

An inference on my part. I assumed this was an extension of your thinking on the accessor dependencies. However, I don't see this as crucial because somebody has to determine when it is safe to do the Get.
At some point the analyst has to at least colorize the OOA to indicate which Gets are related to which pairs of the outgoing and incoming events. I would think that an SDFD whose guards were released by the bridge would be a fairly straightforward and general way to define that in the analysis while keeping the bridges fairly simple.

> Again, we don't need SDFDs for the implicit bridge (though they might > exist in the server). If you have wait states, etc., then your model is > probably polluted. If you simply assume that external activities are > complete by the time that you need their results, then things become > simple. As long as you can define the expected result in the single-domain > case, then this assumption is feasible. My whole discussion on > the meaning of "invariant behaviour over sequentially constrained paths" > was intended to cover the need for parallel activities during the > activity of an implicitly bridged process.
>
> My conclusion is that you need to detect and eliminate all unsynchronised > accesses to attributes. If your model is clean, then implicit bridging > becomes simpler.
>
> There are still some cases where implicit bridging is impossible. Consider > a synchronous random number generator. There are no constructs in OOA > that allow you to have a process whose behaviour is not completely > predictable based on its inputs (plus the state of the domain). In > this case, a synchronous wormhole needs to be defined as a transform > with side-effects. But these situations are relatively rare. I cannot > find any places in the ODMS domain where explicit bridging is required.

You may be able to eliminate the wait states, but it seems to me the analyst needs to replace them with another mechanism that has to be at least as 'polluted'. I agree you can use some other means of guarding the Gets than SDFDs, so the specific mechanism is an implementation issue. My argument is that the analysis issue is the fact that they must be guarded, so that needs to be explicitly indicated in the OOA. And the wormhole paradigm seems to do that quite abstractly.

To the extent that you are proposing a more specific definition (i.e., one that identifies _exactly which_ Gets need to be guarded rather than just the interval of guarding), it seems to me you would need something more in the OOA that explicitly provides that connection rather than hiding it in the bridge implementation. I see identifying what needs to be guarded under what circumstances as an analysis issue; the only implementation issue is what mechanism is used to perform the guarding.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: RE: (SMU) Why Should I?

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

> So...to my real question. I have modeled some basic states > for things like Transaction and Batch. Now the traditional > implementation would be to just stick everything in a > database and run a process over the database on a regular > schedule to do all the posting and such. The question is why > should I bother implementing an architecture and so forth, > with objects sending messages to each other, and all that? > Isn't that just too much added complexity for such an > essentially simple process? What does it gain me?
If the modeling makes something more complex than it is, then you're probably doing something wrong.

One thing I see when I look at your description of the transaction domain is that it contains "customer", which seems like probable domain pollution. This implies that you're trying to couple your application and transaction domains.

I'd need to see the problem in more detail to work out what's really happening. It feels to me that you might want to move some of the transaction behaviour into the application domain, and treat the remainder as an architectural domain. If the application's view of the transaction is delimited by events, then these events can be used to open/close transactions in the architecture. The 'close' event (which wouldn't be named 'close') would not be delivered until the transaction is posted.

As for implementing it in a database, my reaction would be that, if it's a good solution, then you should use it. The fact that you're analysing the problem in SM should not lead you to create your own DBMS! Use your database as an architectural service and use the code generator to write the interface. The architecture should keep track of which data is where.

> *I* think that the primary advantages are separation of > application from implementation, as well as formalism. > The possibility of code generation is an added bonus, > but an expensive (in tool costs) one. Can anyone > enlighten me further?

It shouldn't be expensive. The separation is good, but only you can leverage it. Otherwise it's basically documentation, which will soon gather dust on a shelf.

Dave.
-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: Re: RE: (SMU) Why Should I?

Chris Curtis writes to shlaer-mellor-users:
--------------------------------------------------------------------

> "Peter J. Fontana" wrote: ---------------
> I'm not sure I get the gist of the question - let me see if I can > "repeat" it:
>
> You do not see why you should implement your system as a set of > Finite State Machines.

Yes: this is at least a major issue.

> Assuming I'm somewhat on the mark - let me cut at this at two > levels:
> a) perhaps you assume that a Shlaer-Mellor implementation has a > separate O/S thread of control for each active object

No, I hadn't made that assumption. It would be desirable, I think, for this system, but I hadn't assumed that S-M implies it.

> b) perhaps you see the most straightforward implementation for this > domain is fundamentally synchronous (function call->function call, > as opposed to FSM->event->FSM)

Yes. This is especially true (to me at this point) where there may be 30 million instances of a Transaction object each month.

> Regarding b), the OOA-97 paper (from Kennedy-Carter, www.kc.com) > defines the notion of analysis-explicit domain and object based > services (like methods), which are a straightforward means of > expressing state-independent behavior through synchronous > "function" calls. The current use of UML in the OOA-RD world > further reinforces these services.

I'm not familiar with this document yet, but I got it and am reading it now.

> Here at Pathfinder, we work with a lot of new projects, and finding > the right balance between state models and synchronous services is > difficult for some.
Our quick tip here is to understand why an > "object lifecycle" (per the Shlaer-Mellor book) is not just any old > state model, and model the state-dependent behavior in object > lifecycles. I see how this is a balancing act. I'm not too clear myself on when a synchronous service is appropriate vs. state modeling, but I'll do more reading to see what falls out. What I'm dealing with is a domain expert who is of the firm belief that the correct (i.e. only) way to do this is to create a transaction table and a batch table in a database, with referential integrity triggers and such. You add and delete transactions via SQL. Some process post_pending_transactions() runs periodically and roots through the database doing its stuff. Let me see if I understand this in terms of OOA/RD correctly, though: technically, all that is Architecture that the Transaction domain uses. Other domains shouldn't know or care that it's implemented as a standard database with triggers and scheduled batch processes. Am I getting close to the right idea? --Chris Subject: RE: (SMU) Why Should I? "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > -----Original Message----- > From: Chris Curtis [mailto:chris@satel.com] > Sent: Wednesday, October 13, 1999 4:42 PM > Let me see if I understand this in terms of OOA/RD correctly, though: > technically, all that is Architecture that the Transaction domain > uses. Other domains shouldn't know or care that it's implemented as > a standard database with triggers and scheduled batch processes. > Am I getting close to the right idea? Yes. But a note of caution: it can become unwieldy to try and force OOA semantics into an existing "architecture" that wasn't constructed with OOA semantics in mind. Assuming that the execution primitives you have are appropriate, the next pitfall is making sure your architecture stays free of the details of your analysis, and vice-versa. Subject: Re: RE: (SMU) Why Should I? Chris Curtis writes to shlaer-mellor-users: -------------------------------------------------------------------- > If the modeling makes something more complex than it is, then > you're probably doing something wrong. > > One thing I see when I look at your description ofthe transaction > domain is that it contains "customer", which seems like probably > domain pollution. This implies that you're trying to couple your > application and transaction domains. > Yes, this actually is becoming apparent to me, too. In fact, the more I think about it the domain is a bit wrong. The description of a Transaction from the domain experts is "some [external, human-initiated] action that modifies a customer's account balance". Thinking at the keyboard here, it kinda looks like the real service domain here should be Accounts, of which the batch/transaction behaviour is a part. Does this make sense? > I'd need to see the problem in more detail to work out what's > really happening. It feels to me that you might want to move > some of the transaction behaviour into the applicatin domain, > and treat the remainder as an architectural domain. If the > application's view of the transation is delimited by events, > then these events can be used to open/close transactions in > the architecture. the 'close' event (which wouldn't be named > 'close') would not be delivered until the transaction is > posted. > Why wouldn't the 'close' event be named 'close'? You lost me there. 
What happens is [for example] a customer service rep issues a credit to a customer's account. A transaction is created with the amount of the credit, the fact that it is a credit, and which customer it is associated with. Transactions are [always] grouped into batches. At some point, a supervisor can "close" a batch, which means the transactions can only be deleted or modified by a supervisor. On a periodic basis, the "closed" batches are "posted", which means that each individual transaction is applied to the customer account balance. So eventually, the example credit transaction will be posted, and the customer's account updated with the new balance. The transaction is now considered posted and may not be deleted or modified at this point. > As for implementing it in a database, my reaction would be > that, if its a good solution, then you should use it. The > fact that you're analysing the problem in SM should not > lead you to create your own DBMS!. Use your database as an > architectural service and use the code generator to write > the interface. The architecture should keep track of which > data is where. > I'm a bit unclear on how a code generator can write the interface to the database, but that may be a tangential consideration. (Would this be in my archetypes? Or would it be a bridge to the "database service" domain? I'm very new to translation, obviously.) > It shouldn't be expensive. The separation is good, but only > you can leverage it. Otherwise its basically documentation, > which will soon gather dust on a shelf. > Perhaps it shouldn't be, but it definitely seems to be expensive. $50k for three seats to a tool is pretty expensive. --Chris Subject: RE: (SMU) Why Should I? "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Chris Curtis writes to shlaer-mellor-users: > -------------------------------------------------------------------- >So...to my real question. I have modeled some basic states for things like >Transaction and Batch. Now the traditional implementation would be to just >stick everything in a database and run a process over the database on a >regular schedule to do all the posting and such. The question is why >should I bother implementing an architecture and so forth, with objects >sending messages to each other, and all that? Isn't that just too much >added complexity for such an essentially simple process? What does it gain me? >I should add that this is pretty much the ongoing battle with some >old-school folks. In addition to the other nice answers you already received, I would add the following, as a graduate of "old-school": The traditional system often ends up making _multiple_ passes over the database, with different programs applying various types of transactions. A lot of logic (e.g. automatic overdraft protection) ends up spread out over multiple programs (messy) and even ends up duplicated (dangerous.) The object approach encapsulates all that in one place, and provides an obvious path from model to implementation--your best guarantee that you haven't mucked things up in the implementation. Regards, Chris ---------------------------------- Chris Lynch Abbott AIS San Diego, CA LYNCHCD@HPD.ABBOTT.COM ---------------------------------- Subject: Re: (SMU) Implicit bridges lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... Wow, we really seem to be talking past each other here. 
> > To eliminate these (i.e., to get rid of this manifestation of > > coupling in the OOA) with the implicit bridge mechanism you > > would have to somehow include state transitions in addition > > to data flows in your SDFD. This can get really nasty, I > > think (e.g., dealing with self-directed events that only > > temporarily move away from the acceptance state). > > I don't really see what you're getting at here. It is probably not relevant, given your last point below, but I was under the impression initially that you regarded things like wait states, polling transforms, etc. as manifestations of coupling that should be removed from domains. Given that understanding I was arguing against the notion that all you need to do is block any Gets that might be affected by the service. I argue that the flow of control of the OOA itself may be affected so that whatever mechanism is used to define what the implicit bridge needs to do that blocking will also need to block other events from being processed, even those not directed at the instances the generate the requests or field the response event. > When you use an > implicit bridge, the client domain doesn't need an SDFD. And > you wouldn't put this stuff in the server's. I think the client domain needs something to define the dependencies if your approach is used. It may not be an SDFD, but there has to be some descriptive mechanism in the meta model. > Just accept that > the OOA time rules tell you that the event can be delivered at > a variety of times, and handle this in the state models. This one really blows my mind. This is exactly what I have been saying in the other messages of this thread -- the structure of the OOA state models needs to account for such delays. Whatever such structure is put in place is a manifestation, however mild, of the coupling between domains. What I thought you were proposing was a mechanism by which that structure could be *eliminated* using the implicit bridge mechanism. That is, the necessary guarding to ensure the proper domain state for receiving the delayed event could be handled by a guarding mechanism in the underlying bridge so that it was not visible in the OOA. This paragraph suggests that this is not what you had in mind. If you still expect the OOA to deal with the dependencies for the simultaneous view of time, then what is special about the implicit bridge? If you solve the problem in the state machines, what do you need the implicit bridge for? At another level, we avoid the simultaneous view of time like the plague because it is nontrivial to do. It also leads to your 'hideously complex' state models. In fact, if you regard wait states as domain pollution, I would think you would want to avoid the simultaneous view whenever possible. Unless we have a different definition of what a 'wait state' is, I can't imagine dealing with simultaneous time for all events in a domain without them (or some similarly obnoxious mechanism such as a plethora of flags). I think I would much prefer to have an additional SDFD-like descriptive mechanism to define the dependencies (in the OOA) and then let the architecture figure out the blocking scheme. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > I spent this morning looking at your code (thanks again) and was > > wondering: is ListItem and ListItemLink the actual bridge? > > Nice question, but I'd say no. I'll elaborate below. I would say yes. In fact, your entire approach strikes me as being very much an architectural approach. These objects certainly have no place in the application domain's OOA and it is moot whether they are necessary in the GUI domain's OOA -- they seem to exist solely to provide an implementation solution. > You will see that the application domain has a few pointers into > the ui domain (the "counterpart" members). It is these pointers > that are the projection of the bridge into implementation. The > bridge itself is not visible as an independent entity. Exactly. In effect you are using the namespace and counterpart pointer mechanisms to implement the internals of the application's external interface. Then ListItem and ListItemLink become the implementation of the GUI's external interface for dealing with them. The bridge code itself has been elegantly eliminated via inheritance but the cost is added complexity in the domain interfaces that can lead to... > An interesting point occurs when you look at the interface that > the application domain actually uses when talking to the UI. I > could create interfaces (base classes) named Object and > Relationship and use these instead of ListItem and ListItemLink > within the application code. > > I can then define, in the UI domain: > > class ListItem : public SM_Object { ... }; > class ListItemLink : public SM_Relationship { ... }; > //... > class SM_Relationship : public SM_Object {...} > > We would then refactor the UI classes to move the navigation > and instances collections into the base classes. This strikes me as serious domain pollution if ListItem and ListItemLink are GUI OOA objects rather than bridge objects. Not only does GUI now know that the application has particular objects (i.e., there has to be something on the other end of the counterpart), it also knows what relationships exist between them in the application domain. I don't think the GUI domain has any business knowing anything about object relationships in the application domain. Even if they are intended to simply be bridge objects in the GUI interface I do not see the advantage of this inheritance complexity. It bothers me because it moves too far afield from the simple message passing paradigm of bridges and seems to invite endless tinkering with the architectural elements as new applications with different service models are developed. Complex inheritance trees tend to be both fragile and difficult to refactor when things change moderately. I would prefer to stick with simple data packets and event identifiers in the architecture of the domain interfaces so that they remain the same (e.g., the entire domain can be packaged in a DLL that doesn't change when ported) while I write some glue code for the bridge with each port. > This enables us to use the UI domain as the architecture. The prosecution rests. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N.
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman writes to shlaer-mellor-users: > Responding to Whipp... > > > > I spent this morning looking at your code (thanks again) and was > > > wondering: is ListItem and ListItemLink the actual bridge? > > > > Nice question, but I'd say no. I'll elaborate below. > > I would say yes. In fact, your entire approach strikes me as > being very > much an architectural approach. These objects certainly have no place > in the application domain's OOA and it is moot whether they are > necessary in the GUI domain's OOA -- they seem to exist solely to > provide an implementation solution. They are obviously nothing to do with the application. I think they are UI things. When you looked at the problem, you saw "dog_screen" and "owner_screen". More recently, you pointed out that these were not in the original problem description. Your actual words were: "Ah, but the problem as I read it stated it was that the user wants to see the two lists on a single screen". If the user wants to see two lists, then we can have a list object. As my code showed, I can replace the underlying windowing system with an iostream, so the objects do have meaning outside the domain of the windowing system. I'll admit that the names of the objects could be improved. > > An interesting point occurs when you look at the interface that > > the application domain actually uses when talking to the UI. I > > could create interfaces (base classes) named Object and > > Relationship and use these instead of ListItem and ListItemLink > > within the application code. > > > > ... > > This strikes me as serious domain pollution ... How can we have domain pollution in an implementation??? The whole point of RD is to merge the subject matter together in a way that meets the performance (etc) requirements of the system. RD does not need to preserve the structure of the model, only the behaviour. > Even if they are intended to simply be bridge objects in the GUI > interface I do not see the advantage of this inheritance > complexity. The code I presented was close to the simplest possible mapping of the model into c++ (not necessarily the same thing as the simplest c++ that can implement the problem). Any improvements (i.e. optimisations) to it are likely to be more complex, at least initially. (Actually, the implementation of the (M) derivation could be simplified -- I made a few inline optimisations which could be undone). I find that RD development follows a rain-drop shape. You start with something really simple. Then you evolve it into something more complex. Then, as you begin to see where you're going, the whole thing snaps back to simplicity. > It bothers me because it moves too far afield from the simple > message passing paradigm of bridges and seems to invite endless > tinkering with the architectural elements as new applications > with different service models are developed. When exploring architectures, speculative tinkering can be a good thing. > Complex inheritance trees tend to be both fragile > and difficult to refactor when things change moderately. The tinkering is done at a meta-level, so this refactoring is not a problem.
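To make the shape of that mapping concrete, here is a minimal sketch of the counterpart/base-class arrangement being discussed -- the class names and members are illustrative only, not the actual generated code from the earlier posting:

    #include <iostream>
    #include <string>

    // Illustrative architectural base class (assumed, not the real one).
    // It carries whatever the UI side needs from any counterpart object.
    class SM_Object {
    public:
        virtual ~SM_Object() {}
        virtual std::string label() const = 0;   // what the UI will display
    };

    // A UI-domain class: it only knows it has *some* counterpart object.
    class ListItem {
    public:
        explicit ListItem(const SM_Object* counterpart) : counterpart_(counterpart) {}
        void render(std::ostream& os) const { os << "* " << counterpart_->label() << "\n"; }
    private:
        const SM_Object* counterpart_;   // the projection of the bridge into code
    };

    // An application-domain class; the base class is its only UI coupling.
    class Dog : public SM_Object {
    public:
        explicit Dog(std::string name) : name_(name) {}
        std::string label() const override { return name_; }
    private:
        std::string name_;
    };

    int main() {
        Dog rex("Rex");
        ListItem item(&rex);      // the "bridge" is just this pointer
        item.render(std::cout);   // the UI could equally be an iostream or a window
    }

The application side sees nothing but the base class; swapping the windowing system for an iostream touches ListItem, not Dog.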
> I would > prefer to stick with simple data packets and event identifiers in the > architecture of the domain interfaces so that they remain the same > (e.g., the entire domain can be packaged in a DLL that doesn't change > when ported) while I write some glue code for the bridge with > each port. The domain is a unit of conceptual cohesion. It is meaningless to package it as a DLL, because the domain has no implementation until it's combined with other domains (notably, the architecture). The opposite extreme to this horizontal distribution model is the vertical model: you take each object in the application domain, and trace its counterpart network through the system. Then you package each of those trees as a DLL (whose primary interface is derived from the application-domain view). Both extremes are non-optimal (and impossible!). Good code generation is just too tangled! Dave. Subject: RE: (SMU) Why Should I? Chris Curtis writes to shlaer-mellor-users: -------------------------------------------------------------------- >In addition to the other nice answers you already received, I would add the >following, as a graduate of "old-school": The traditional system often ends >up making _multiple_ passes over the database, with different programs applying >various types of transactions. A lot of logic (e.g. automatic overdraft >protection) ends up spread out over multiple programs (messy) and even ends up >duplicated (dangerous.) The object approach encapsulates all that in one place, and >provides an obvious path from model to implementation--your best guarantee that you >haven't mucked things up in the implementation. Thank you. This seems to be a particularly clear way of expressing the advantages of this approach. Especially helpful to see what this buys over the old batch & database world. An odd thought occurs to me... the Batch collection is really an implementation artifact of how the old systems had to deal with Transactions. On the other hand, it has acquired (I think) a semantic association with the concept of a Supervisor approving Transactions in groups. Apropos of nothing, really... just thinking at the keyboard. ------------------------- Chris Curtis Systems Engineer "Where am I, and what am I doing in this handbasket?" Subject: RE: (SMU) Implicit bridges David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman responded to me: > > Any behaviour > > that is visible in your subject matter is, in a sense, part of > > that subject matter. So it should be modeled in the domain. > > Taking this to an extreme it seems to me that every > application would have only > one domain other than the architectural domains. That extreme position is almost correct. Every subject matter only has one domain (by definition). Every domain is a complete description of its subject matter (also by definition). If one regards the application as a subject matter (an assumtion that is open to debate) then its description must, therefore, be complete within one domain. The part that you are ignoring is that a system consists of many subject matter. From the perspective of *any* domain within the system, none of the other subject matter alter the 'laws of physics' of that domain. They are all architectural. Taken to extreme, we could say that the application is part of the architecture on which the architecture runs!! > > You only need these wait states, etc. because you are modeling the > > implementation. 
If you start introducing these things, then you > > are fighting the OOA formalism, not leveraging it. > > I think we need those wait states, etc. because of the nature > of asynchronous communications between domains. Sure, I could > use a transform to poll the hardware ... Yes, I think we were talking about slightly different things. The wait states I was abolishing are those introduced by modelling the explicit use of the interface. Once you reach pure lifecycles, there will, indeed, be states that are waiting for events. > > No, you don't need a test harness. The OOA meta model > > defines the possible behaviour of the event. > > But it doesn't define the data packet returned, either > synchronously or asynchronously. That has to be provided > via some sort of harness. My point was that the wormhole > paradigm provides a simple way to handle that for the > simulator vendor so the analyst merely supplies the data > packets. Without the wormhole you would have to do that > yourself because the simulator would not understand the > bridge. If you want to supply specific values, to check specific test cases, then you will still need to build a harness. If, however, you simply want to do boundary/domain/etc testing then you should be able to use a generic test fixture. > This is what I mean by your severely restricting things. As > I understand it you would not be able to use such simulators > because you have hard wired the events to be synchronous in > the domain during single domain simulation (i.e., the > 'Robot_Move_Complete' event is placed on the queue during the > action when the 'generate Robot_Move_Complete' is executed > in the simulation). This would preclude statistical delays > in event processing or any other reordering of the event > processing order. I think there's been a complete failure of communication. I am not suggesting that you force events to be synchronous. In fact, I'm suggesting the exact opposite: that much of the behaviour that is assumed to be synchronous can actually be asynchronous. For example, a set-accessor does not need to complete within the scope of its enclosing actions, provided it is complete before anyone needs to use it ... and, then, only if there is a sequential constraint between the getter and the setter. Once you can extend the set accessors; and you realise that when OOA says that actions take time, they really mean it; then the ability to implicitly bridge these elements is enhanced. The downside is that you've got to start thinking in simultaneous time, but that's not really a bad thing. > First, I thought you said that in single domain simulation the > event generation would just synchronously place the expected > return event on the queue. It may place it on a queue, but remember that every instance pair has its own queue, and each queue may have independent, and variable, delays. So the fact that the single-domain simulator places the event on a queue doesn't do much except guarantee that either it'll be delivered (sometime, according to the time rules of OOA) or an exception is generated. > Second, the expected return event in the asynchronous > situation might not be directed at that state, instance, > or even object requesting the service. Therefore the > self directed event would have to be generated in the > target instance simultaneously in some magical manner. An event is always directed at its intended recipient (even creation events). If you don't want it to be self-directed, then don't self-direct it.
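Coming back to the queues for a moment: as a toy illustration only (no real simulator works exactly like this, and all the names are invented), a per-instance-pair queue with variable delays might look something like:

    #include <cstdio>
    #include <map>
    #include <queue>
    #include <random>
    #include <string>
    #include <utility>
    #include <vector>

    // Toy event: who sent it, who receives it, and a label.
    struct Event {
        int sender, receiver;
        std::string name;
    };

    // Each (sender, receiver) pair gets its own FIFO with its own (randomly
    // chosen) delivery delay. Ordering is preserved within a pair, but events
    // on different pair-queues may interleave arbitrarily.
    class PairQueues {
    public:
        void post(const Event& e, double now) {
            std::pair<int,int> key(e.sender, e.receiver);
            if (!delay_.count(key)) delay_[key] = pick_delay();
            pending_.push({now + delay_[key] + order_ * 1e-9, e});
            ++order_;  // tie-breaker keeps FIFO order within a pair
        }
        bool deliver_next(Event& out) {
            if (pending_.empty()) return false;
            out = pending_.top().second;
            pending_.pop();
            return true;
        }
    private:
        double pick_delay() {
            static std::mt19937 rng(42);
            return std::uniform_real_distribution<>(0.1, 5.0)(rng);
        }
        using Timed = std::pair<double, Event>;
        struct Later { bool operator()(const Timed& a, const Timed& b) const { return a.first > b.first; } };
        std::map<std::pair<int,int>, double> delay_;
        std::priority_queue<Timed, std::vector<Timed>, Later> pending_;
        long order_ = 0;
    };

    int main() {
        PairQueues q;
        q.post({1, 2, "A1"}, 0.0);   // instance 1 -> instance 2
        q.post({3, 2, "B1"}, 0.0);   // instance 3 -> instance 2: different pair, different delay
        q.post({1, 2, "A2"}, 0.0);   // still delivered after A1, because same pair-queue
        Event e;
        while (q.deliver_next(e)) std::printf("deliver %s\n", e.name.c_str());
    }

The only ordering that survives is the pair-wise FIFO ordering; how events on different pair-queues interleave depends entirely on the delays, which is exactly the freedom the OOA time rules give the architecture.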
> Third, I just don't understand how this process by which > the 'system' chooses not to deliver the event is different > than an asynchronous wormhole, other than the fact that it > is limited to self-directed events. Somehow you have to > indicate in the OOA which events are really self directed > (i.e., Now) vs. those that set up guarding (i.e., Later). The mechanism is in no way restricted to self directed events. Its not even restricted to events. *Any* element in the OOA meta model can be implicitly bridged. And, from the system perspective, you can still have wormholes. Its just that the client domain doesn't know about them. In the ideal case, the server doesn't know about them either. The whole issue is about how to set up the bridges in a way that eliminates coupling from the domains. The only way to do this is to say that the only external behaviour seen by a domain is the OOA meta model. > My first difficulty is that there might not be an obvious > problem with the bridge > until it gets into the field. A lot of asynchronous systems > behave pretty > synchronously until some bizarre situation arises. I want > some clue in the OOA > that this might be an issue so I have a better chance of > preventing the problem. Start off from an OOA model in simultaneous time. If it works in that sitution, then it should work in any parallel situation. Of course, if you haven't used static proofs, then you may not have tested the appropriate interaction. But once you find the bug, you'll be able to reproduce it in the single-domain simulation. Dave. Subject: Re: (SMU) Why Should I? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Curtis... > > I worry about this. I think the OIM should be completed before > > launching into state modeling. > > I should probably back up here and state that at this point I am > trying to take an example portion of the system all the way through > the OOA/RD process as a demonstration. So far the entire system has > had around 2 days of analysis....so ordinarily I would not be at this > point yet. Tch, tch, tch. Just because you are in a hurry for a one-shot is not an excuse for not doing it right. B-) > > (2) The architecture allows substantial reuse on a single > > platform. If it is done properly, every new > > application/enchancement can use it as is. > > A nice idea, to be sure. I haven't yet really seen it used much, > though. Probably because most of the projects I've worked on haven't > really had much fomal design-for-reuse in them. I assume that you are doing manual code generation. If so, then a viable tack in persuading people is that providing an architecture is nothing more than adding a set of tools to make code implementation easier. The OOA already isolates you from the implementation because the allowed notation is thoroughly abstract, so there is no major mind-bending required. [One can still sneak some implementation into the OOA, but if someone experienced is doing reviews that shouldn't be a big problem.] The first thing you need is a suite of templates with placeholders for the coding. For your transaction domain these templates might have the boilerplate for opening DB transactions, etc. built into them. This makes for a nice example of the benefits of the approach because it is obvious that coding time for the DB boilerplate is saved. 
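For instance -- and this is purely illustrative, not taken from any real architecture or tool -- a template for an active object might expand to a skeleton like the following, where the analyst fills in only the marked action bodies and the DB boilerplate comes along for free:

    #include <cstdio>

    // Hypothetical stand-ins for whatever the DB vendor actually provides.
    void db_begin()  { std::puts("BEGIN TRANSACTION"); }
    void db_commit() { std::puts("COMMIT"); }

    // What a template might expand to for a "Batch" state machine. The
    // surrounding structure is boilerplate; only the action bodies change
    // from object to object.
    class Batch {
    public:
        enum State { OPEN, CLOSED, POSTED };
        enum EventId { EV_CLOSE, EV_POST };

        explicit Batch(int id) : id_(id), state_(OPEN) {}

        // Boilerplate event dispatcher -- identical for every active object.
        void dispatch(EventId ev) {
            switch (state_) {
            case OPEN:   if (ev == EV_CLOSE) { action_close(); state_ = CLOSED; } break;
            case CLOSED: if (ev == EV_POST)  { action_post();  state_ = POSTED; } break;
            case POSTED: /* no transitions out of the final state */             break;
            }
        }

    private:
        // Analyst-supplied action bodies go in the placeholders below.
        void action_close() {
            db_begin();
            std::printf("  <<< close action for batch %d goes here >>>\n", id_);
            db_commit();
        }
        void action_post() {
            db_begin();
            std::printf("  <<< post action for batch %d goes here >>>\n", id_);
            db_commit();
        }

        int id_;
        State state_;
    };

    int main() {
        Batch b(17);
        b.dispatch(Batch::EV_CLOSE);
        b.dispatch(Batch::EV_POST);
    }

Once a skeleton like this is generated for each active object, coding a state machine is mostly a matter of filling in the placeholders.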
If you need a queue manager, then such templates also handle the identifier conventions and the infrastructure for efficiently registering and finding instances. This should demonstrate that it really isn't a lot of trouble to code an object's state machines because the infrastructure is hard-wired. [Since it is not tough to build a queue manager, you might want to do that even if you could do a synchronous architecture -- just to show the value of the hard-wiring in the templates.] I would be tempted to do the first project just at this level of architecture. This would demonstrate that converting the models to code is not tough even with a very rudimentary architecture. Once people start using the templates they will get sucked into making improvements by tweaking the templates. Next they will start building a library of implementation classes for things like smart pointers. Then they will move on to perl scripts. Eventually your problem will not be getting people on board -- it will be maintaining source control on the architecture. You might also want to emphasize to people that when you start with a simple architecture, it is not very different than normal coding from any set of models. Also, the architecture can be improved incrementally over time to add tools and continually make life easier for implementation. You might use Whipp's example code from the other thread. It, together with his mail message, illustrates incremental improvement to make things more generic. While I wasn't enamored of that context, it was a nice example of the sorts of things that you can do to make an architecture more robust and reusable. > Yes...in fact, most of the developers who have seen the models we > have developed so far are absolutely thrilled at the clarity with > which they understand the system. Tech support folks, too. In fairness, they are only seeing half of the picture. The other half is the architecture itself, which can be more complicated than the application -- when you get into more sophisticated architectures they will probably have their own OOA. The architecture also needs to be documented in other ways to promote reuse, etc. However, splitting those pictures does make understanding the system a lot easier. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Why Should I? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Curtis... > Yes. This is especially true (to me at this point) where there may be > 30 million instances of a Transaction object each month. I assume that these have to be stored persistently between the major processing points for a batch (close, post). With that volume I would further assume that you are going to use somebody else's DB to store them to take advantage of their economies of scale in optimizing. This suggests to me that you have two relevant domains. The first is an architectural domain that is the DB server engine (possibly handling multiple DBs). The second is your Transaction domain. This further suggests that the Transaction domain has rather limited scope in the scheme of things. I agree with Whipp that I would be very suspicious of Accounts here. I would see it more as an agent for communicating with the DB and keeping track of things like Batches. 
The contents of an individual transaction, other than possibly a transaction type, might be totally irrelevant to it -- it is just passed through to the DB as a flock of bits. [I could even imagine situations (fixed number of transactions per batch, of the same type, etc.) where a transaction might be an attribute of Batch in this domain.] However, I am not convinced that 30 M transactions is so many that you need to worry about the overhead of event processing. That averages to less than 15 K/min. If you take care with the queue manager, the overhead will usually be only a dozen machine instructions or so per event processed. That should not be significant relative at that level of processing. > What I'm dealing with is a domain expert who is of the firm belief that the correct (i.e. only) way to do this is to create a transaction table and a batch table in a database, with referential integrity triggers and such. You add and delete transactions via SQL. Some process post_pending_transactions() runs periodically and roots through the database doing its stuff. If you are going to use SQL, you definitely don't have to worry about queue manager overhead. B-) The tables are probably fine, but I would try to make a case for talking to the DB engine in its native mode. What you describe does not seem to be very complicated (i.e., there aren't any ad hoc queries, joins, etc.), so it should be possible to map the events or accessors directed to/from the database engine domain into native server calls with very little difficulty. > Let me see if I understand this in terms of OOA/RD correctly, though: > technically, all that is Architecture that the Transaction domain > uses. Other domains shouldn't know or care that it's implemented as > a standard database with triggers and scheduled batch processes. Am I getting close to the right idea? It is *mostly* architecture. I think that in the Transaction Domain you have to deal with the notion of closing and posting batches at the application level, but I would be surprised if anything except the Batch object is active and its actions should be pretty small. If you think of this domain as I described it above, it really doesn't do a lot. Most of the real work will be done in accessors that directly map into DB engine calls (i.e., it is entirely possible that you can replace the accessor with a native DB API call). These forums are great, aren't they? People who have read two Emails can design your whole application for you. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Why Should I? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Curtis... > Yes, this actually is becoming apparent to me, too. In fact, the more I think about it the domain is a bit wrong. The description of a Transaction from the domain experts is "some [external, human-initiated] action that modifies a customer's account balance". Thinking at the keyboard here, it kinda looks like the real service domain here should be Accounts, of which the batch/transaction behaviour is a part. Does this make sense? I am with Whipp here -- Accounts don't go in this domain. 
If you are using a commercial DB for persistence, there is going to be a way to run a 'canned' procedure or query to execute update transactions against the DB, probably a trigger for a particular transaction type. That trigger will handle the actual update of customer accounts. That trigger is part of your architecture and it has to be defined to the DB engine. Once it is there, though, the idea of "changing account balance" takes on a very different meaning in your application. It now means "run transaction X against the DB". Now the most your Transaction domain needs to know about this processing is the transaction type, and it may not even have to know that. The domain doesn't have to know where customer accounts are in the DB because that is handled by the trigger. What it does need to know about are Batches and the Transactions that are associated with them. Thus the mission, subject matter, responsibilities, and level of abstraction of this domain become very narrow and very simple. > > I'd need to see the problem in more detail to work out what's > > really happening. It feels to me that you might want to move > > some of the transaction behaviour into the applicatin domain, > > and treat the remainder as an architectural domain. If the > > application's view of the transation is delimited by events, > > then these events can be used to open/close transactions in > > the architecture. the 'close' event (which wouldn't be named > > 'close') would not be delivered until the transaction is > > posted. > > > > Why wouldn't the 'close' event be named 'close'? You lost me there. Whipp's spin may be somewhat different, but mine is that events should be announcements rather than commands. That is, the paradigm is "I'm Done" rather than "Do That". State machines should be built to describe the behavior of the object without regard to context. Hence a state machine should execute its action and then announce to the world that it has completed that action. It is up to the analyst to determine who should be interested in that announcement and direct the event accordingly within the context of the domain. This happens to be a hot button for me because I think it is a very important philosophical issue that distinguishes S-M from other methodologies. Because the OOPLs all took the easy way out and mapped messages into method invocations, a generation of developers has lost track of the difference between 'message' and 'method' and grown up thinking about object communications in terms of "Do That". We see that now in the dominance of responsibility methods where objects are defined as encapsulations of behavior and the data that supports it rather than the original definition of encapsulations of data and the operations upon it. Thus it is no surprise that the responsibility approaches are treading very close to coming full circle to the traditional functional decomposition of Structured Programming. But I digress... > I'm a bit unclear on how a code generator can write the interface to > the database, but that may be a tangential consideration. (Would this > be in my archetypes? Or would it be a bridge to the "database > service" domain? I'm very new to translation, obviously.) Whipp will undoubtedly come up with an elegant way to completely eliminate the Transaction domain altogether with architectural magic. In the meantime, let me propose a more prosaic approach. As I envision this domain it is not very complicated. 
It needs to keep track of the status of Batches and periodically run a bunch of associated Transactions against the DB. Depending on how long it is between closing and posting, it probably has to save Transactions and Batches in the DB, at least temporarily. Given my understanding thus far, this is basically just reading/writing records to DB tables and, occasionally, running a particular transaction type to update the customer accounts. Let's assume the DB provides a native API with functions like Read, Write, and Execute. The arguments might be something like Record Type, Key, and Buffer Address. There are lots of ways to map these into the domain code, but I'll do two of them here. The first way is to generate events to a domain that represents the database engine. The translation of the events is easy -- the event generator is simply replaced by the appropriate DB API call: Read, Write, or Execute. An intermediary bridge routine might be required to remap the data. The Read would, of course, be a synchronous call and you would post the returned data to the relevant object. The second way is to use accessors or transforms. This is exactly the same idea where the action that, say, saves a Batch instance invokes an accessor or transform. That accessor or transform is replaced in the translation in the same way as the event generator process above. In both cases the implementation of the event generator or accessor/transform is part of a bridge. That bridge connects the Transaction domain to the DB engine domain (which is actually a third party client API DLL linked into the application). > Perhaps it shouldn't be, but it definitely seems to be expensive. $50k for three seats to a tool is pretty expensive. That price has to include a simulator/code generator. It's pretty much a bargain. Simulation is a boon to development because it lets you find logical errors early in the design process. The simulator provides a much higher level of abstraction that is much easier to deal with than language IDE debuggers. It has the beneficial side effect of forcing you to write test use cases early. Code generation may or may not be beneficial. The problem is performance. Often the code generators write poorly optimized code that is unacceptably slow. That is getting better now that the CASE vendors have gotten past the problem of generating correct code and providing supporting tools so that they can work on their architectures. If the code is fast enough for you (we only manually code low level device driver domains) then automatic code generation is a major win. Code generation takes tens of minutes vs. 5-10% of the development cycle and you don't have to worry about debugging typos. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > > > > I spent this morning looking at your code (thanks again) and was > > > > wondering: is ListItem and ListItemLink the actual bridge? > > > They are obviously nothing to do with the application. I think they > are UI things. > > > > An interesting point occurs when you look at the interface that > > > the application domain actually uses when talking to the UI.
I > > > could create interfaces (base classes) named Object and > > > Relationship and use these instead of ListItem and ListItemLink > > > within the application code. > > > > > > ... > > > > This strikes me a serious domain pollution ... > > How can we have domain pollution in an implementation??? If ListItem and ListItemLink are domain objects, then the fact that they inherit from an object shared across domains is domain pollution. Let's say the application is modified or the GUI is ported so that the relationship between Dog and Dog Owner is now managed by an associative object i nthe application. Then ListItemLink is broken. Your argument is that the inheritance is in the implementation (in particular, it is a bridge implementation) so it can be changed when the domain is re-translated. My problem with that is that the domain translation is context dependent -- the translation has to change when another domain's internals are changed. This is why I prefer writing some context-dependent bridge code that sits between two domain interfaces that are invariant with context. > > Even if they are intended to simply be bridge objects in the GUI > > interface I do not see the advantage of this inheritance > > complexity. > > The code I presented was close to the simplest possible > mapping of the model into c++ (not necessarly the same thing > as the simplest c++ that can implement the problem). It is elegant and terse, but not simple. Big difference and one of the reasons large projects in C++ have problems. It is academic if you are doing auto code generation, but if you are maintaining a manually generated system it can be a headache. > > Complex inheritance trees tend to be both fragile > > and difficult to refactor when things change moderately. > > The tinkering is done at a meta-level, so this refactoring > is not a problem. I am not sure what you mean by 'meta-model' here. That might be true of the revision you suggested, involving MS-Object, etc., but the original code snippets struck me as being application specific. Even at the MS-Object level I suspect it would be a pretty hefty meta-model. B-) > > I would > > prefer to stick with simple data packets and event identifiers in the > > architecture of the domain interfaces so that they remain the same > > (e.g., the entire domain can be packaged in a DLL that doesn't change > > when ported) while I write some glue code for the bridge with > > each port. > > The domain is a unit of conceptual cohesion. It is meaningless to > package it as a DLL, because the domain has no implementation > until it's combined with other domains (notably, the architecture). Say, what?!? We do this all the time. It is absolutely essential in a plug & play environment where components are being plopped in and out of dozens of products. It also cuts down build time when enhancing only a part of a large system. When we translate a domain it is _always_ to a DLL and we _always_ translate them independently. [Well, almost always -- some of the older code packaged 2-3 domains in a single DLL, but we don't do that anymore.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Implicit bridges lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... 
> > Taking this to an extreme it seems to me that every > > application would have only > > one domain other than the architectural domains. > > That extreme position is almost correct. Every subject matter > only has one domain (by definition). Every domain is a complete > description of its subject matter (also by definition). If one > regards the application as a subject matter (an assumtion that > is open to debate) then its description must, therefore, be > complete within one domain. > > The part that you are ignoring is that a system consists of > many subject matter. From the perspective of *any* domain > within the system, none of the other subject matter alter > the 'laws of physics' of that domain. They are all > architectural. > > Taken to extreme, we could say that the application is part > of the architecture on which the architecture runs!! Holy Translation, Batman! I knew you were into RD, but this is seriously upwind of Extremist. Put one Application domain at the top of the DC that has one Doit object and let's generate some software! If it doesn't turn out to be exactly what the customer wanted, let's tweak those translation rules and party again! > > But it doesn't define the data packet returned, either > > synchronously or asynchronously. That has to be provided > > via some sort of harness. My point was that the wormhole > > paradigm provides a simple way to handle that for the > > simulator vendor so the analyst merely supplies the data > > packets. Without the wormhole you would have to do that > > yourself because the simulator would not understand the > > bridge. > > If you want to supply specific values, to check specific > test cases, then you will still to build a harness. If, > however, you simply want to do boundary/domain/etc > testing then you should be able to use a generic test > fixture. I do not understand what you mean by 'boundary/domain/etc testing'. It seems to me that one always needs to define the data in incoming event data packets, whether they are stimuli events or response events. The wormhole allows the simulator vendor to provide the same 'harness' for both. With the implicit bridge this is not true. > > This is what I mean by your severely restricting things. As > > I understand it you would not be able to use such simulators > > because you have hard wired the events to be synchronous in > > the domain during single domain simulation (i.e., the > > 'Robot_Move_Complete' event is placed on the queue during the > > action when the 'generate Robot_Move_Complete' is executed > > in the simulation). This would preclude statistical delays > > in event processing or any other reordering of the event > > processing order. > > > First, I thought you said that in single domain simulation the > > event generation would just synchronously place the expected > > return event on the queue. > > It may place it on a queue, but remember that every instanace > pair has its own queue, and each queue may have independent, > and variable, delays. So the fact that the single-domain > simulator places the event on a queue doesn't do much > except guarentee that either it'll be delivered (sometime, > according to the time rules of OOA) or an exception is > generated. This is a very fancy architecture. Let's go back to a more typical one where there is a single event queue for a domain that simply flushes as fast as it can (checking for target instances that are active) rather than waiting for actions to complete. 
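(As a toy sketch of what I mean by such a queue -- the names are invented for illustration, not taken from any actual architecture:)

    #include <cstdio>
    #include <deque>
    #include <functional>
    #include <map>

    // A toy single-queue architecture: one FIFO for the whole domain,
    // flushed as fast as possible.
    struct QueuedEvent {
        int target_instance;
        int event_id;
    };

    class DomainQueue {
    public:
        using Action = std::function<void(int /*event_id*/)>;

        void register_instance(int id, Action on_event) { instances_[id] = on_event; }
        void deactivate(int id) { instances_.erase(id); }

        void generate(const QueuedEvent& e) { fifo_.push_back(e); }

        // Flush in strict FIFO order; events to instances that are no longer
        // active are simply dropped. There is no notion of delay here, so the
        // processing order is fixed the moment an event is queued.
        void flush() {
            while (!fifo_.empty()) {
                QueuedEvent e = fifo_.front();
                fifo_.pop_front();
                std::map<int, Action>::iterator it = instances_.find(e.target_instance);
                if (it != instances_.end()) it->second(e.event_id);
            }
        }

    private:
        std::deque<QueuedEvent> fifo_;
        std::map<int, Action> instances_;
    };

    int main() {
        DomainQueue q;
        q.register_instance(1, [](int ev) { std::printf("instance 1 got event %d\n", ev); });
        q.generate({1, 42});
        q.generate({1, 43});
        q.flush();   // always 42 then 43 -- no hook for simulating a reordering delay
    }
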
In single domain simulation mode once the event is placed on the queue the order is fixed. This precludes simulation where the queue is reordered to reflect delays. Even assuming the multi-queue model, I don't see how delays are properly tested. For the same stimulus event the sequence of event processing will still always be the same if the event is always placed on the queue in the action without a delay. Therefore you still can't simulate delays that change the order in which events are processed. > The whole issue is about how to set up the bridges > in a way that eliminates coupling from the domains. > The only way to do this is to say that the only > external behaviour seen by a domain is the OOA > meta model. Given previous messages I now realize you were still assuming the OOA was constructed to handle asynchronous delays. I still argue that whatever you do to do that is a manifestation of the coupling. It is just somewhat less obvious why the analyst is constructing the OOA that way. > > My first difficulty is that there might not be an obvious > > problem with the bridge > > until it gets into the field. A lot of asynchronous systems > > behave pretty > > synchronously until some bizarre situation arises. I want > > some clue in the OOA > > that this might be an issue so I have a better chance of > > preventing the problem. > > Start off from an OOA model in simultaneous time. If it > works in that sitution, then it should work in any > parallel situation. Of course, if you haven't used static > proofs, then you may not have tested the appropriate > interaction. True, but my issue here is that I want to prevent problems rather than hoping the testing is adequate to recover from them. Among other things, verifying that the model actually works in the simultaneous view is a combinatorially large problem that may not even be feasible to test exhaustively. To prevent such screwups during maintenance I want more clues in the OOA about tricky situations rather than less. > But once you find the bug, you'll be able to reproduce > it in the single-domain simulation. This one I don't buy. A single domain simulation that provides no mechanism for changing the order of events processed is unlikely to find such an error because it is not testing the true simultaneous case; it is effectively just checking one synchronous case. I think you have to have some sort of hook to simulate the delay by changing the order of events. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > > How can we have domain pollution in an implementation??? > > If ListItem and ListItemLink are domain objects, then the > fact that they inherit from an object shared across domains > is domain pollution. I'm going to stick to my guns. It cannot possibly be domain pollution unless the sharing is visible from either domains. The sharing is defined in the bridge, and reflacted in the translation of the domains. So there is no pollution. > Let's say the application is modified > or the GUI is ported so that the relationship between Dog > and Dog Owner is now managed by an associative object in the > application. Then ListItemLink is broken. No, ListItemLink is not broken. 
It works just fine. Arguably, even the bridge is not broken, because the counterpart relationship is still between the relationship and the ListItemLink. This depends on how the bridge is specified. > Your argument is that the inheritance is in the > implementation (in particular, it is a bridge implementation) > so it can be changed when the domain is re-translated. My > problem with that is that the domain translation is context > dependent -- the translation has to change when another > domain's internals are changed. Meaningful RD has to be context sensitive. Otherwise all you've got is a code generator, in which case you might as well use the Rose built-in. This doesn't mean that the generator has to change with the context. Simply that it'll do something different when its input changes. > This is why I prefer writing some context-dependent bridge > code that sits between two domain interfaces that are > invariant with context. Why would you want to write bridge code? Sure it's often necessary, but we should strive to avoid it. > > The code I presented was close to the simplest possible > > mapping of the model into c++ (not necessarily the same thing > > as the simplest c++ that can implement the problem). > > It is elegant and terse, but not simple. Big difference and > one of the reasons large projects in C++ have problems. I didn't say the c++ was simple. I said the mapping was simple. Big difference. > It is academic if you are doing auto code generation, but > if you are maintaining a manually generated system it can > be a headache. Yes, the rules for manual code are different to those of generated code. Even the definition of "good design" is different, because many of the principles of good design are aimed at maintainability. In big systems, long build times can become a problem, in which case that might be one of the requirements on your architecture. > > > Complex inheritance trees tend to be both fragile > > > and difficult to refactor when things change moderately. > > > > The tinkering is done at a meta-level, so this refactoring > > is not a problem. > > I am not sure what you mean by 'meta-level' here. That might > be true of the revision you suggested, involving MS-Object, > etc., but the original code snippets struck me as being > application specific. Even at the MS-Object level I suspect > it would be a pretty hefty meta-model. B-) Meta level means that I don't change the generated code: I change the generator. Of course the generated code is application specific. That's what implementation is. But the underlying structure is not application specific. And there's nothing wrong with application driven rules. A typical rule is created when you say: "This generated code is inadequate for this application; how can we fix the generator?" You then find a rule that creates good code for the specific application, but you attempt to describe the rule without being application specific. > > The domain is a unit of conceptual cohesion. It is meaningless to > > package it as a DLL, because the domain has no implementation > > until it's combined with other domains (notably, the architecture). > > Say, what?!? We do this all the time. It is absolutely > essential in a plug & play environment where components are > being plopped in and out of dozens of products. It also cuts > down build time when enhancing only a part of a large system. > When we translate a domain it is _always_ to a DLL and we > _always_ translate them independently. I think you missed my point.
What you package in the DLL is generated code, not a domain. You have arbitrarily chosen to structure the physical distribution using domain boundaries. Dave. Subject: RE: (SMU) Why Should I? "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Chris Curtis writes to shlaer-mellor-users: >-------------------------------------------------------------------- >An odd thought occurs to me... the Batch collection is really an >implementation artifact >of how the old systems had to deal with Transactions. On the other hand, >it has acquired >(I think) a semantic association with the concept of a Supervisor approving >Transactions >in groups. Apropos of nothing, really... just thinking at the keyboard. You pre-empted my next comment. An excellent sign--it shows that you are thinking like an analyst and not like a designer. However, my bank DP experience has been that implementation artifacts have a funny way of endearing themselves to the operational staff and insinuating themselves as "business processes". Just because something is "mere implementation" does not mean that you will be able to easily get rid of it. Indeed, it may be beneficial if it helps the people do their jobs more efficiently. One caveat: don't assume that recursive design and translation are necessary to get the benefits of SMOOA. Because you seem to be building an application vital to the survival of your business and this forum does not get a lot of traditional DP questions, I should point out that... A) I think OOA (info modeling and state modeling) is a zero-risk, guaranteed payoff activity. Unconditionally recommended. B) OOD and OOP... 1) Are technology challenges for some old-line DP staffs; learning curve is an issue, and the first one or two designs can be poor. 2) Were not designed with the high-volume transaction market in mind. If raw throughput and reliability are paramount, you should be asking tough questions of the people who sell you object technology. Ask your vendors for customer lists and applications. 3) Have played a prominent role in some large-scale failed DP projects. Ask around at banks, credit card companies, and insurance companies. See if you can learn from their mistakes. C) RD is rocket science for the newbie, and is __optional__ in SM. If you do it, get an expert to help you! As an alternative, M. A. Jackson's and J.R. Cameron's books on JSP and JSD give a great perspective on manual mapping from the object perspective to the batch processing implementation. Unfortunately they are not on the OO book lists, because they did OO modeling in 1975--before the term "object oriented" came into common use. But I think they are essential reading in your environment. They did OOA and then turned around and wrote COBOL under CICS and on top of IMS and TPF. I suspect some of the big OO guys would not know much about this. Hope this helps, -Chris ----------------------------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger ----------------------------------------------------------------------- Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I'm going to stick to my guns.
It cannot possibly be domain > pollution unless the sharing is visible from either domains. > The sharing is defined in the bridge, and reflacted in the > translation of the domains. So there is no pollution. My argument is that if the incoming external interface of GUI has to change based upon changes to the internals of another domain, then that is prima facie evidence of domain pollution. I don't disagree that the problem is in the implementation rather than the model. But what it reflects is unnecessary coupling between domains, which is what prompted this thread in the first place. You can regard this as a different sort of domain pollution, but it is still domain pollution in my view because it affects the characteristics of a domain at the DC model level. I also believe it violates the way RD is supposed to work (see below) at the domain level. > No, ListItemLink is not broken. It works just fine. Arguably, > even the bridge is not broken, because the counterpart > relationship is still between the relationship and the > ListItemLink. This depends on how the bridge is specified. It is broken because it has to be re-implemented even though the computing environment has not changed. > Meaningful RD has to be context sensitive. Otherwise all you've > got is a code generator, in which case you might as well use > the Rose built-in. This crystallizes a niggling disagreement we have had for years. I agree that the RD needs to be context sensitive. However, I feel that sensitivity is limited to the computing environment rather than the application. When one defines translation rules for a domain they should address optimization for the computing environment and should be independent of the models and implementation of other domains. The only place that application dependence should come into play is in the bridge code that glues domain interfaces together and it should only understand what is exposed in those interfaces, not the domain internals. > > This is why I prefer writing some context-dependent bridge > > code that sits between two domain interfaces that are > > invariant with context. > > Why would you want to write bridge code? Sure its often > necessary, but we should strive to avoid it. On the contrary, I view it as essential to decoupling domains. > > > The domain is a unit of conceptual cohesion. It is meaningless to > > > package it as a DLL, because the domain has no implementation > > > until it's combined with other domains (notably, the architecture). > > > > Say, what?!? We do this all the time. It is absolutely > > essential in a plug & play environment where components are > > being plopped in and out of dozens of products. It also cuts > > down build time when enhancing only a part of a large system. > > When we translate a domain it is _always_ to a DLL and we > > _always_ translate them independently. > > I think you missed my point. What you package in the DLL is > generated code, not a domain. You have arbitrarily chosen to > structure the physical distribution using domain boundaries. What we package in the DLL is a decoupled domain. That allows us to swap that domain among applications in a given computing environment without re-translation (or even recompilation). In my view that is exactly what should happen when I translate a domain. So long at the result of translation remains in that environment I should be able to move it As Is between applications by simply providing bridge code. 
If I can't do that, then there is something wrong with the domain translation. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Why Should I? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > One caveat: don't assume that recursive design and translation are > necessary to get the benefits of SMOOA. Because you seem to > be building an application vital to the survival of your business and > this forum does not get a lot of traditional DP questions, > I should point out that... A quibble but any process that converts an S-M OOA to code is, by definition, translation. And RD is so loosely defined publicly that almost any process might qualify, including elaboration. However, your point is well taken -- there is no need for it to be complicated. On our first pilot project none of the people had taken the RD course so we just coded from the bubbles & arrows. It wasn't very sophisticated, but it worked. > A) I think OOA (info modeling and state modeling) > is a zero-risk, guaranteed payoff activity. Unconditionally > recommended. Absolutely. > B) OOD and OOP... > 1) Are technology challenges for some old-line DP staffs; > learning curve is an issue, and the first one or two > designs can be poor. I mildly disagree here. They can be challenging for non-DP staffs or anyone else who has not done OOD/P before. B-) At another level I think an S-M OOA provides a basis that puts covers over many of the worst pits. A typical S-M OOA does not require and usually discourages using lots of fancy OOPL features that can get one in trouble. Typically you don't have very complex inheritance trees and functional inheritance is virtually non existent. The use of state machines forces breadth-first flow of control rather than depth-first. You can use dynamic polymorphism in bridges but you don't really need to do so. And I haven't overloaded an operator in five years. So if you simply code from the OOA intuitively, you are less likely to get in trouble than you might by elaborating into a more complex OOD model. > 2) Were not designed with the high-volume transaction > market in mind. If raw throughput and reliability are > paramount, you should be asking tough questions of > your the people who sell you object technology. > Ask your vendors for customer lists and applications. I would argue that S-M was designed to handle this. You can pop those third party domains into the DC without missing a step. Then the bridging paradigm invites you to handle the interface correctly with only minor coaching from the sidelines. > 3) Have played a prominent role in some large-scale > failed DP projects. Ask around at banks, credit card companies, > and insurance companies. See if you can learn from their > mistakes. I fervently believe that S-M provides an inherently simpler approach to OT that avoids a lot of traditional problems (e.g., scaling of poorly designed OOPLs like C++). However, I also think that two things are crucial to early success: ubiquitous work product reviews and some professional hand holding. I don't advocate for full time consultants, but I think they are very useful for reviewing projects on a periodic basis. This is cheap insurance against moving down the wrong paths. 
> C) RD is rocket science for the newbie, and is __optional__ in SM. > If you do it, get an expert to help you! I look at RD a little differently. I see it as a gradation of sophistication. At one end you sit down at a keyboard with some diagrams and start writing code from them, solving problems locally and intuitively as they come up. At the other end of the spectrum are the highly sophisticated, automated systems that Dave Whipp advocates. But I think that everything in between can also qualify as RD. I certainly agree with your implied point, though, that if one is doing manual generation or just embarking upon developing an architecture, KISS should apply initially. Provide only a very basic suite of tools and then grow them over time as experience is gained. > As an alternative, M. A. Jackson's and J.R. Cameron's books on JSP and > JSD > give a great perspective on manual mapping from the object perspective > to the batch processing implementation. Unfortunately they are not > on the OO book lists, because they did OO modeling in 1975--before > the word "object oriented" came into common use. But I think they > are essential reading in your environment. They did OOA then turned > around and wrote COBOL under CICS and on top of IMS and TPF. > I suspect some of the big OO guys would not know much about this. Many, many years ago (back when business systems were called MIS) I used Jackson's method. At the time I read an article by someone who had compared methodologies by having different gurus solve the same problem. The author noted that everyone's design looked pretty much the same except Michael's but didn't quite know what to make of his, other than the fact that it seemed to be simpler. Everyone else was doing SA and very few people had even heard of OT. With a couple of decades of hindsight I agree you could make a case for the fact that JSx were the harbingers of OT in SA clothing. One of the nice things about S-M OOA is that it can be translated fairly easily into procedural languages. In fact our current code generator only does straight C. It might be a bit trickier in languages w/o pointers but not terribly so. Marginally related apocryphal aside... Back in the mid '80s we were trying to do OT in BLISS. This was tricky in a low level system programming language where the only data type is integer. OTOH, BLISS had a marvelous macro facility (the original BLISS authors once solved the Towers Of Hanoi problem at compile time using just preprocessor macros; they also built a full BASIC interpreter using just run time macros). This allowed us to do a lot of RD-like things using the macro facilities so that one coded using OO constructs that the macros converted to BLISS code. It was an interesting adventure, though the project got killed for marketing reasons. I just wish that I had known about S-M back then -- we had an elegant system for translating the wrong things because we had no clue how to write an OO application. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N.
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing Simon Wright writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > What we package in the DLL is a decoupled domain. That allows us to swap > that domain among applications in a given computing environment without > re-translation (or even recompilation). In my view that is exactly what > should happen when I translate a domain. So long as the result of > translation remains in that environment I should be able to move it As Is > between applications by simply providing bridge code. If I can't do that, > then there is something wrong with the domain translation. That's _your_ architecture, and there's absolutely nothing wrong with that if it's what you need. But (what Campbell McC has been telling us about for a while) there is nothing to prevent my code generation from being a good deal more aggressive if that is what I need. I can eliminate loads of bridge code by providing tabular mappings (Campbell's half tables) and having code generation write the interfaces for me. No need for domain interface specs (was that your phrase?) .. -- Simon Wright Email: simon.j.wright@gecm.com Alenia Marconi Systems Voice: +44(0)2392-701778 Integrated Systems Division FAX: +44(0)2392-701800 Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > Responding to Whipp... > My argument is that if the incoming external interface of GUI > has to change based upon changes to the internals of another > domain, then that is prima facie evidence of domain pollution. If this were true, I might agree. But what is changing? My UI has a very simple interface: You can create/delete list items; you can create/delete links between them. You can select/deselect a list item. You can see when a list item is highlighted. Which of these interfaces is changed by your suggested change in the application model? > I don't disagree that the problem is in the implementation > rather than the model. But what it reflects is unnecessary > coupling between domains, which is what prompted this thread > in the first place. But what coupling? The implementation *that I chose* happens to be one that does not promote binary plug&play. But that is not visible in the domain analysis (except the architecture domain, of course). > You can regard this as a different sort > of domain pollution, but it is still domain pollution in my > view because it affects the characteristics of a domain at > the DC model level. It may be coupling in the design. It might be poor cohesion. These are terms used to describe characteristics of the implementation. It has no effect on the domain analysis. > I also believe it violates the way RD > is supposed to work (see below) at the domain level. You seem to have a very limited view. RD does not require structural continuity between the DC and the code. It does not require binary plug&play of translated domains. It does not require the domain separation to be visible in the generated code. > > No, ListItemLink is not broken. It works just fine.
Arguably, > > even the bridge is not broken, because the counterpart > > relationship is still between the relationship and the > > ListItemLink. This depends on how the bridge is specified. > > It is broken because it has to be re-implemented even though > the computing environment has not changed. The environment of a domain is not limited to the 'platform domain'. A domain's environment is every other domain, and every bridge, in the system. A change to any of these may require the entire system to be re-translated. There are good reasons for limiting the scope of re-builds, but these reasons are not requirements in the general case. > However, I feel that sensitivity is limited to the computing > environment rather than the application. When one defines > translation rules for a domain they should address > optimization for the computing environment and should be > independent of the models and implementation of other > domains. Optimising a single domain is very limited. If you are serious about optimisation then you usually need to increase coupling in the implementation. A GUI is a bad example of this, because optimisation of a GUI is not generally useful. But if I have a very busy bridge in the system, then eliminating the bridge in the implementation can have dramatic results. > > Why would you want to write bridge code? Sure it's often > > necessary, but we should strive to avoid it. > > On the contrary, I view it as essential to decoupling domains. Bridges decouple the domains. Bridge *code* simply couples a bridge to its implementation, and makes it difficult to merge the bridge into the translated domains. (Unless, that is, your bridge code is also translated) > What we package in the DLL is a decoupled domain. That > allows us to swap that domain among applications in a > given computing environment without re-translation (or > even recompilation). In my view that is exactly what > should happen when I translate a domain. So long as the > result of translation remains in that environment I > should be able to move it As Is between applications > by simply providing bridge code. If I can't do that, > then there is something wrong with the domain translation. You are describing the requirements of your specific product. Your binary reuse requirements are more important than raw performance. But imagine you have a 3D graphics domain and a vector-mathematics domain. Separating these two domains into isolated DLLs would be unacceptable -- you'd really want to use inline expansions of the maths for the graphics. And if you consider an implementation onto an FPGA, or even an ASIC, I doubt that you'll be able to package it as a DLL. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Why Should I? David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman writes to shlaer-mellor-users: > At the other end of the spectrum are the highly sophisticated, > automated systems that Dave Whipp advocates. A clarification: . I do favor automation. . I don't necessarily favour sophistication. A lot of hand crafting goes into translators, so the XP principles of YAGNI and DTSTTCPW are very important. (YAGNI == You Ain't Gonna Need It, DTSTTCPW == Do The Simplest Thing That Could Possibly Work. See http://c2.com/cgi/wiki?ExtremeProgramming) Dave.
-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Need help conceptualizing Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Mrs. Lahman and Whipp, I can follow the conversation along when talking about Mr. Whipp's code. However, when Mr. Lahman talks about alternatives I start losing it! :^) I think I understand, but I am not sure. Mr. Lahman, would you care to code up an alternative to illustrate your differing opinion? Specifically address your statements: "This is why I prefer writing some context-dependent bridge code that sits between two domain interfaces that are invariant with context." "It is academic if you are doing auto code generation, but if you are maintaining a manually generated system it can be a headache." It's obvious that there are widely varying opinions. I don't mind comparing and contrasting the two (or more!). Following along as best as I can! :^) Kind Regards, Allen Theobald Nova Engineering, Inc. Subject: Re: (SMU) Bridging (nee Need help conceptualizing) lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Wright... > > lahman writes to shlaer-mellor-users: > > What we package in the DLL is a decoupled domain. That allows us to swap > > that domain among applications in a given computing environment without > > re-translation (or even recompilation). In my view that is exactly what > > should happen when I translate a domain. So long at the result of > > translation remains in that environment I should be able to move it As Is > > between applications by simply providing bridge code. If I can't do that, > > then there is something wrong with the domain translation. > > That's _your_ architecture, and there's absolutely nothing wrong with > that if it's what you need. But (what Campbell McC has been telling us > about for a while) there is nothing to prevent my code generation from > being a good deal more agressive if that is what I need. I can > eliminate loads of bridge code by providing tabular mappings > (Campbell's half tables) and having code generation write the > interfaces for me. No need for domain interface specs (was that your > phrase?) .. Perhaps I should have said, "...by simply generating bridge code." I have nothing against schemes that examine the relevant domain OOAs and automatically generate bridges for a particular application. Greg Eakman, who did most of our architectural stuff, gave a paper on doing exactly that a couple of years ago ('97?) at E-SMUG. What I think should be true, though, is that in a given computing environment a domain's internals should only need to be translated once, regardless of how many applications might use it in that environment. In practice there are limitations in the computing environment (e.g., the mechanism for exporting interfaces from an NT DLL is a static definition) that force the architect to make tradeoffs where this may not be true. Nonetheless I see such decoupling to be the main goal of the bridging paradigm so that one should strive to be compliant. I don't think 'domain interface specs' was my phrase, at least not recently. But I don't think they are eliminated just because one automatically generates the bridge. 
What I view as a 'domain interface spec' is the detailed bridge description from the Domain Chart. This describes the requirements on the service domain and is unavoidable. It is also the thing that tells the architect *which* interfaces need to be written for you. Having said all this, there is a second thing that militates for the view that domains can be re-translated per application, aside from the flawed computing environment I mentioned above. The DC bridge descriptions are written on a per-application basis. Thus the requirements on a reused service domain might be quite different in two applications. If they require changing the domain's internals at the OOA level (i.e., they represent new requirements on the domain's mission), then one has to re-translate anyway. Unfortunately it is also possible that even if such requirements don't cause changes to the domain's internals at the OOA level, they may require inconvenient changes to the supposedly invariant domain interfaces. That is, one has the classic object reuse problem where it is difficult to design an interface that will serve all clients. S-M has come much closer than other methodologies to providing a true component reuse paradigm, but it is still not quite there yet. S-M places a lot of emphasis at the DC level on defining level of abstraction, subject matter, mission, etc. This is clearly going in the right direction of defining what you can expect from a domain. S-M also provides the wormhole discipline that severely narrows what the interface can look like and do. But we still don't have a protocol for idiots to use when defining invariant domain interfaces that guarantees that you really can move a domain from one application to another easily through bridging. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Why Should I? "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- responding to Lahman.... >> One caveat: don't assume that recursive design and translation are >> necessary to get the benefits of SMOOA. Because you seem to >> be building an application vital to the survival of your business and >> this forum does not get a lot of traditional DP questions, >> I should point out that... > >A quibble but any process that converts an S-M OOA to code is, by definition, >translation. And RD is so loosely defined publicly that almost any process might >qualify, including elaboration. However, your point is well taken -- there is no >need for it to be complicated. For purposes of making my point, by "RD & translation" I refer to a very "mechanical", no-brainer process of going from models to code, whether the engine of that mechanism is a person or a computer. I am not the first to note that the typical reusable, all-purpose architecture (what Dave Whipp has called "OOA of OOA") often yields the slowest of the potential designs. I offered this as a caveat to our transaction processing colleague because "automation" is often viewed as a core benefit of the method. My point is that unchecked desire to automate system construction can sometimes lead to very long efforts and uneconomical systems. Some evangelists make blithe claims in the vein of "oh, don't worry about that; that's just a mapping..." Sometimes those mappings are awfully difficult.
>> 2) Were not designed with the high-volume transaction >> market in mind. If raw throughput and reliability are >> paramount, you should be asking tough questions of >> your the people who sell you object technology. >> Ask your vendors for customer lists and applications. > >I would argue that S-M was designed to handle this. You can pop those third party >domains into the DC without missing a step. Then the bridging paradigm invites you >to handle the interface correctly with only minor coaching from the sidelines. We must be thinking of different service domains. I was thinking of the archaic file systems and database managers still found on modern mainframes. When I had to go from the relational data model of the IM to this environment, "bridging" is not the term that comes to mind-- "torture" is ! >With a couple of decades of >hindsight I agree you could make a case for the fact that JSx were the harbingers >of OT in SA clothing. It's an interesting mapping ! :-) Regards, Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Subject Matter are Single Domain Neil Earnshaw writes to shlaer-mellor-users: -------------------------------------------------------------------- David.Whipp@infineon.com wrote: I've extracted these three principles from your first posting on implicit bridges. > The domain should be its own universe. > ... you should never solicit ... an action. > > External inputs are only visible from an SDFD Is it fair to extract these points and if so is it correct to assume the following: 1. All request wormholes, transfer vectors, return coordinates, synchronous and asynchronous return wormholes, counterpart relationships and counterpart terminators are bridge artifacts and as such have no place in any of the models that describe a domain. 2. There are no external entities/terminators in a domain. If the domain knows about them, they are brought into the domain's universe as objects. In the ODMS, there is just the Robot - the Robot Hardware is nowhere to be seen. 3. There are no outgoing events because there is no 'out' for them to go to. In the juice plant OCM, events directed to the Operator terminator are replaced by events directed at an Operator object. 4. Domain synchronous services can only be invoked by bridge code as a result of mapping some event or process in another domain to a DSS. 5. Incoming events can only be generated by bridge code as a result of mapping some event or process in another domain to an external event. Neil Earnshaw Object Software Engineers Ltd Subject: RE: (SMU) Subject Matter are Single Domain David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Neil Earnshaw replying to me: > I've extracted these three principles from your first posting > on implicit bridges. > (1) > > The domain should be its own universe. (2) > > ... you should never solicit ... an action. (3) > > External inputs are only visible from an SDFD > > Is it fair to extract these points ... The first 2 I would answer with a definite yes. The last one is slightly more problematic. When you treat a domain as an architecture, it is common for a bridge to create/delete instances in the server. Constructing SDFDs for every object in a domain seems to be a waste of time. 
There are a number of solutions to this. One is a "lightweight" SDFD mechanism: the bridge sees a normal SDFD; but the single-process, synchronous, SDFD has a lightweight definition table. Another is to directly support the concept of counterparting in the metamodel and to support some type of "public" property on some objects in the server domain. A third approach is to say that a bridge always has a "whitebox" viewpoint on a domain, but a domain model can declare some objects/events/etc as "protected" (so the modeler can guarantee there will be no surprises). I actually quite like the third option: it allows the modeler to ignore the interfacing issue and concentrate on the behaviour of the domain. > ... and if so is it correct to assume the following: > 1. All request wormholes, transfer vectors, return > coordinates, synchronous and asynchronous return > wormholes, counterpart relationships and counterpart > terminators are bridge artifacts and as such have no > place in any of the models that describe a domain. Yes > 2. There are no external entities/terminators in a domain. > If the domain knows about them, they are brought into the > domain's universe as objects. In the ODMS, there is just > the Robot - the Robot Hardware is nowhere to be seen. Yes, but don't limit yourself to objects. Events, attributes, transitions, processes, etc. may all have counterparts. > 3. There are no outgoing events because there is no 'out' for > them to go to. In the juice plant OCM, events directed to > the Operator terminator are replaced by events directed at an > Operator object. Do we need an operator object? It seems to me that 3 of the outgoing events could be modeled as attributes, which can be directly counterparted. (To think of it another way, the bridge can observe their values by intercepting the 'set' accessors). Incoming events can come in via an SDFD, and don't need an object. If you want to avoid single-process SDFDs, then we might allow the events to be generated directly from the bridge. The 4th outgoing event, OP4: "Get canning decisions", with its CO5 reply looks very dubious. As I mentioned in another post, there does seem to be a need for a "random" process within a model to get information that is not known to the domain. ("random", because the domain cannot know, or knowingly influence, the returned value. The former would imply that it did know the information, and the latter would imply domain pollution). The random generator is, of course, a wormhole: but the domain shouldn't think of it as a link to another domain. I don't know what a "planning decision" is. Maybe there is a state whose purpose is to wait for the plan; maybe a state contains a random process. I don't know the problem. But I'd want to avoid soliciting the external (CO5) event. > 4. Domain synchronous services can only be invoked by bridge > code as a result of mapping some event or process in another > domain to a DSS. Yes and no. If you have a realised domain, then it won't have OOA elements to map. Oh, and I'd like to get away from the concept of "bridge code" ... it's simply a bridge. > 5. Incoming events can only be generated by bridge code as > a result of mapping some event or process in another domain > to an external event. Depends on whether we allow incoming events! My caveats for the extracted principle (3) and for your assumption (4) apply. Dave.
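[Aside: a minimal C++ sketch of the "intercept the 'set' accessors" idea above -- one way an architecture might let a bridge observe a counterparted attribute without the domain ever soliciting anything. The names here (ObservableAttribute, batch_status, the lambda observer) are invented for illustration; this is not code from any of the models discussed in the thread.]

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// An attribute whose 'set' accessor notifies any registered observers.
// The domain's own actions just call set(); they never know who, if
// anyone, is listening on the other side of the bridge.
class ObservableAttribute {
public:
    typedef std::function<void(const std::string&)> Observer;

    void set(const std::string& value) {
        value_ = value;
        for (const Observer& observer : observers_) {
            observer(value_);              // a counterpart bridge sees the write here
        }
    }

    const std::string& get() const { return value_; }

    // Called by the (generated) bridge when the system is assembled,
    // never from within the domain's own state actions.
    void attach(const Observer& observer) { observers_.push_back(observer); }

private:
    std::string value_;
    std::vector<Observer> observers_;
};

int main() {
    ObservableAttribute batch_status;      // hypothetical attribute in a server domain

    // Bridge-side hookup: a counterpart in a client domain (a UI, say)
    // tracks the attribute's value without the server domain knowing.
    batch_status.attach([](const std::string& v) {
        std::cout << "counterpart updated: " << v << "\n";
    });

    batch_status.set("canning");           // an ordinary domain-side write; the bridge observes it
    return 0;
}

Whether the observer list lives in a generated accessor, in the attribute itself, or in an architecture-level dispatch table is an implementation choice; the point is only that the domain model contains no reference to whoever is watching.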
Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > My argument is that if the incoming external interface of GUI > > has to change based upon changes to the internals of another > > domain, then that is prima facie evidence of domain pollution. > > If this were true, I might agree. But what is changing? You depend upon the internals of another domain (i.e., specific relationships among specific objects) when implementing the ListItemLink object, which you said you regard as a GUI domain object. I was hypothesizing that *if* the internals of that other domain change, then ListItemLink's implementation must change. > > I don't disagree that the problem is in the implementation > > rather than the model. But what it reflects is unnecessary > > coupling between domains, which is what prompted this thread > > in the first place. > > But what coupling? The implementation *that I chose* happens > to be one that does not promote binary plug&play. But that > is not visible in the domain analysis (except the architecture > domain, of course). The internal implementation of a domain object (ListItemLink) depends upon the OOA and internal implementation of another domain. In my view that is heavy duty coupling. While the problems with such coupling are most commonly shown in porting domains, they could appear whenever one does any sort of maintenance on one domain that requires changes to the other domain as a result of the coupling. > > I also believe it violates the way RD > > is supposed to work (see below) at the domain level. > > You seem to have a very limited view. RD does not require > structural continuity between the DC and the code. It > does not require binary plug&play of translated domains. > It does not require the domain separation to be visible > in the generated code. This seems to be the heart of our difference. I see the whole point of the bridging abstraction to be decoupling of the domains, thus allowing domains to stand alone. This is certainly true of the OOA -- if one domain's model depends upon another domain's model something is seriously wrong. It seems logically inconsistent to allow such coupling in the RD. It seems to me that one should strive to preserve the same decoupling at the OOD level. One way to do this is to adopt the bridge model where each domain has an invariant external interface for a particular computing environment and the bridge glue code simply joins these. If one adopts this view you can eliminate a large amount of coupling at no loss in efficiency because the domain's external interfaces are already adapted to the computing environment. The acid test is the answer to the question: would you do this if the system were manually coded so that you would have to continue maintaining the same code? The answer would be No because this level of coupling would be an unnecessary maintenance headache. When you are using automatic code generation this tends to become as academic as the rat's nest of Assembly branches that the compiler generates for your C++ code. But I have two problems with this. First, my understanding is that we have been talking about a manually generated system (i.e., I don't think Theobald has a code generator). Second, I don't think it is as completely separated as the Assembly branch analogy.
Unless you have a *very* fancy configurable code generator I would bet that you would have to go tinker with the architecture before being able to generate ListItemLink's implementation again after you, say, replaced the relationship in the other domain with two relationships. Therefore the coupling is probably relevant even during automatic generation. > The environment of a domain is not limited to the 'platform > domain'. A domain's environment is every other domain, and > every bridge, in the system. A change to any of these may > require the entire system to be re-translated. There are > good reasons for limiting the scope of re-builds, but these > reasons are not requirements in the general case. I agree it is not limited to the 'platform domain' -- by definition every bridge must understand the external interfaces of each domain. But as I read your statements here you seem to be saying that the internals of a domain implementation must understand the context of other domains. In particular, ListItemLink is a GUI domain object and it depends upon a specific model of the other domain. This is the level of coupling that I reject. > Optimising a single domain is very limited. If you are serious > about optimisation then you usually need to increase coupling > in the implementation. A GUI is a bad example of this, because > optimisation of a GUI is not generally useful. But if I have > a very busy bridge in the system, then eliminating the bridge > in the implementation can have dramatic results. I am not convinced of this. It would certainly be true if you were limited to a single mechanism to reduce coupling (e.g., dynamic polymorphic interfaces in C++). But RD provides a very powerful technique for doing this because the domain interfaces can be implemented specifically to address such performance problems by selecting the best available mechanism for the computing environment. It is not clear to me that one could not eliminate any overhead for data passing by using a proper mechanism. From another view, the OOA is very specific about what bridges can do. They pass data in messages. Moreover, it is scalar data. That is not a complex thing to optimize. > Bridges decouple the domains. Bridge *code* simply couples a > bridge to its implementation, and makes it difficult to merge > the bridge into the translated domains. (Unless, that is, your > bridge code is also translated) This one has me confused again. The first sentence seems to be agreeing with my position. B-) Also, I have always regarded bridges as being part of the translation. CASE tools provide action languages that make it easy for Analysts to write bridge code, but I still regard that as a translation activity. > > What we package in the DLL is a decoupled domain. That > > allows us to swap that domain among applications in a > > given computing environment without re-translation (or > > even recompilation). In my view that is exactly what > > should happen when I translate a domain. So long as the > > result of translation remains in that environment I > > should be able to move it As Is between applications > > by simply providing bridge code. If I can't do that, > > then there is something wrong with the domain translation. > > You are describing the requirements of your specific > product. Your binary reuse requirements are more important > than raw performance. I don't see it that way. I see this as fundamental to the bridging paradigm.
You have an architecture that should be common to all applications in a computing environment. You have translation rules that express a specific OOA's logic in that environment's architectural mechanisms. If the logic doesn't change and the architecture doesn't change, why should the domain have to be re-translated for different applications? Something is wrong with that picture. > But imagine you have a 3D graphics > domain and a vector-mathematics domain. Separating these > two domains into isolated DLLs would be unacceptable -- > you'd really want to use inline expansions of the maths > for the graphics. You can do it by having each domain's external interface read/write shared memory. But this is really an issue about whether a computing environment has adequate mechanisms to do what the program needs to have done (which is why game developers and Microsoft are barely on speaking terms). If the environment doesn't, then the translation is going to have to create some dirty code. [We once had to shorten error messages, map global data (it was an unenlightened era when most of the code was written) into union arrays, and bypass the overlay processor by updating frame pointers on the fly because we broke DEC's RSX-11 linker for the PDP11 with the size of the application.] > And if you consider an implementation onto an FPGA, or even > an ASIC, I doubt that you'll be able to package it as a > DLL. True. But you'll be able to use it without modification in any system with that hardware. The FPGA is just the package that you choose for the computing environment. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Why Should I? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > . I don't necessarily favour sophistication. A lot of hand > crafting goes into translators, so the XP principles of > YAGNI and DTSTTCPW are very important. > > (YAGNI == You Ain't Gonna Need It, > DTSTTCPW == Do The Simplest Thing That Could Possibly Work. > See http://c2.com/cgi/wiki?ExtremeProgramming) That explains a lot. B-) But OTUG has been going on with the XP Wars for over a year, so let's not go there. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > Mrs. Lahman and Whipp, I am hoping this just didn't come out the way you intended. B-) > Mr. Lahman, would you care to code up an alternative to illustrate > your differing opinion? Specifically address your statements: I don't do code; I'm a megathinker. Besides, you have to have a masochistic streak to put quick code snippets up for public scrutiny. [Years ago P.J.Plauger had a column in "Computer Language" where he was including example code snippets. He got critiqued regularly in the Letters section every month for the errors, style, etc. The last one was an example defending GOTOs because they sometimes made the code more 'clear'. The example had about 12 lines of which at least 75% had GOTOs.
It was absolutely the ugliest code I have ever seen and it had at least two errors in it. He got trashed so badly for that one that he wrote an entire column defending himself. So far as I know he has never included a code snippet in an article since. But I digress...] > "This is why I prefer writing some context-dependent bridge code > that sits between two domain interfaces that are invariant with > context." The issue at hand was how to get information about which dogs were owned by which owners. Let's assume I have an Ownership Screen that displays two lists. I might also have a List object with instances for a list of dogs and a list of owners. Each would have ListItems related to them where each ListItem carried display information about a specific dog or owner. At this point Whipp and I part company because I would simply have a relationship between ListItems to reflect ownership. The problem is to instantiate the relationship so that when one ListItem is selected from, say, the list of dogs in Ownership Screen, the corresponding owner ListItem would be highlighted. [At this point it is worth noting that the relevant attribute in ListItem is probably just something generic like Name. There is probably a label in Ownership Screen that identifies each list as having dogs or owners, but the naming semantics is mostly a convenience for the analyst -- dogs and owners probably don't matter much semantically in the GUI domain. What we really have is two types of text strings that are related and need to be highlighted.] Without worrying about which object is active and manages the highlighting, let's just assume some object is in charge and that it has a state action that gets executed when an incoming event from the Window Manager announces a new selection from one of the lists. The way I would probably do this is to send an event in that action to a wormhole asking for the corresponding Name. Let's say this is event E1 and it has a data packet with the Name of the highlighted dog (assuming a dog name was clicked) and some identifier, 'Q', to indicate Name is from the list of dogs. Let's further assume that the response will be an asynchronous event, E2, that will trigger the next transition in the object to a state that will extract the owner Name from the data packet and cause the relevant ListItem to be highlighted in the display. Now that we have the scene set, we can try a little pseudo code. The GUI domain will have an interface to the external world. I'll make a class for this in the implementation and call it I1. It will have two methods: send_E1 and receive_E2.

class I1 {
public:
   void send_E1 (char* name, char* item_type);
   void receive_E2 (char* name);
};

And in the I1.cpp file we might have something like:

void I1::send_E1 (char* name, char* item_type)
{
   call B1_D::bridge_for_E1 (name, item_type);
}

void I1::receive_E2 (char* name)
{
   (1) create an event, E2, with 'name' in the data packet.
   (2) place E2 on the GUI domain's event queue.
}

Now to connect the GUI to some other application domain, D, we might have a class B1_D that had a method bridge_for_E1, as shown above. The implementation of bridge_for_E1 might look something like:

B1_D::bridge_for_E1 (char* name, char* item_type)
{
   (1) Change 'item_type' from 'Q' to 'D' via some lookup table because we know domain D will only associate 'D' with dogs.
   (2) do some housekeeping to know what routine to call (I1::receive_E2) for the response using the 'return_handle'.
   (3) call I2::request_dog_owner (name, item_type, return_handle);
}

where class I2 is the class for the external interface to domain D and request_dog_owner is a method that represents a meaningful request for that domain. When D gets through figuring out who the owner is, the domain will generate an event to a wormhole, K2. To handle this we might have the routines:

I2::request_dog_owner (char* name, char* item_type, int return_handle)
{
   (1) save 'return_handle' somewhere in the interface
   (2) construct an event, K1, with 'name' and 'item_type'
   (3) place K1 on domain D's event queue
}

I2::send_K2 (char* name)
{
   (1) Find 'return_handle' since this will be an asynchronous response associated with this request. (The domain interface must know which communications are responses to external requests.)
   (2) call B1_D::bridge_for_K2 (name, return_handle)
}

B1_D::bridge_for_K2 (char* name, int return_handle)
{
   (1) 'return_handle' will identify I1::receive_E2 as the correct method in GUI's interface to invoke for K2 as a response.
   (2) call I1::receive_E2 (name)
}

If I took some time I could surely come up with some better naming conventions. However, as I am sure you can see I1 and I2 have no dependence upon one another. The only class that knows about both domains specifically is B1_D and it only knows about the interfaces I1 and I2. This allows I1 and I2 to be completely invariant, regardless of the application -- only B1_D has to be modified if the names change or even if there are differences in the interfaces (e.g., if B1_D::bridge_for_E1 has to call two interface methods, one for the type and one for the name). This is basically the point of the comment you quoted; I1 and I2 are not application dependent so the domains can be translated once for a platform. [Not strictly true in this case because both I1 and I2 have to agree on a class name for the bridge, B1_D. You can get around this, though, with a little thought.] The price for this is twofold. First, you have to write B1_D every time you replace GUI or D. It will usually be pretty trivial, but it is code that is usually unique to the application. The other price, in this example, is the overhead for all those context switches as the methods are called. [Note, BTW, that this indirection is not unlike that advocated by gurus when using dynamic polymorphism as a decoupling agent.] This is what Whipp eliminated by introducing ListItemLink and, effectively, sharing implementations through inheritance -- the I1, I2, and B1_D classes are nowhere in evidence. My response is that this is a simplistic example to model the idea of bridges having three components -- two interfaces and glue code -- and that this sort of problem can be addressed in the implementations of I1 and I2 in a particular computing environment. For example, they could just read/write to shared memory via some predefined mapping and protocol. Then the shared memory effectively becomes B1_D. > "It is academic if you are doing auto code generation, but if you > are maintaining a manually generated system it can be a headache." My problem is with the implementation coupling. Since the coupling exists only at the implementation level of the OOD it does not affect the inherent logic of or relationships between domains in the OOA. If you have automatic code generation, that coupling does not matter much because when you change models or port to another platform you just rebuild it from scratch with whatever changes are necessary.
It is much like the awful Assembly code a C++ compiler generates -- you would never dream of *writing* Assembly that way, but so long as it gets rebuilt correctly every time you recompile and you never have to look at it, who cares? This is not the case with manually generated systems where any change has to be maintained in the code itself. Now heavy coupling between domains can be a major pain because now that 'Assembly' isn't hidden away you have to work directly with it. > It's obvious that there are widely varying opinions. I don't mind > comparing and contrasting the two (or more!). Like anything else in software, there are always several ways to do something. Much of my disagreement with Whipp stems from a different viewpoint. I tend to look at an application from an abstract, generic stance while Whipp has a much more code-centric view, which is why he writes code examples to prove a point while I argue aesthetics. The only really fundamental thing we seem to be currently disagreeing about is the level of coupling between domains to be allowed in the implementation. And I suspect that can only be resolved by some Senior Guru Wisdom because I don't think the RD is defined well enough yet. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Why Should I? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > >I would argue that S-M was designed to handle this. You can pop those > third party > >domains into the DC without missing a step. Then the bridging paradigm > invites you > >to handle the interface correctly with only minor coaching from the > sidelines. > > We must be thinking of different service domains. I was thinking of the > archaic file systems and database managers still found on modern > mainframes. When I had to go from the relational data model of the > IM to this environment, "bridging" is not the term that comes to mind-- > "torture" is ! I guess so. I was actually thinking of something like an Oracle RDB. My argument is that S-M bridges provide a ready-made discipline for placing wrappers and APIs around third party software. However, I agree that if the wrapped component was not designed to be wrapped, it can be an adventure. A lot of vendors claim they provide COM or CORBA interfaces when, in fact, interoperability is possible only in an extremely limited fashion. How many commercial applications tout interoperability when they really mean, "We interoperate so long as you give us the hooks to let us run the show," or, "What we really mean is that you can spawn us as a separate task, we'll read the data file you create, and, if we're in a good mood, we'll let you know when we are done."? But I digress... -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing chris.m.moore@gecm.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > > Responding to Theobald... > Besides, you have to have a masochistic streak to put > quick code snippets up for public scrutiny. But I see that doesn't extend to pseudocode.
:) > The issue at hand was how to get information about > which dogs were owned > by which owners. Let's assume I have an Ownership > Screen that displays > two lists. I might also have a List object with > instances for a list of > dogs and a list of owners. Each would have ListItems > related to them > where each ListItem carried display information about > a specific dog or > owner. At this point Whipp and I part company because I would simply > have a relationship between ListItems to reflect ownership. Whipp is right when he points out that this is domain pollution. It fails the domain substitution principle. When you give the GUI domain to your co-worker to use in another project he's bound to come and ask you what the ownership relationship between ListItems is. I think the confusion arises because people confuse the GUI domain with what they want their GUI to do. My GUI domain contains Windows, Widgets, Lists, ListItems etc. and, if I'm feeling adventurous, Callbacks. But the instantiation of the domain at runtime consists of 2 Lists containing ListItems labeled with owner and dog names. When the user clicks a ListItem containing an owner's name, a callback (another name for a bridge) is invoked which invokes the dogs_owned_by(owners_name) service, which returns a set of Dog objects. Then we loop through the set of Dog objects and highlight the ListItems in the dog-based List whose label matches the Dog's name. Hence >> context-dependent bridge code >> that sits between two domain interfaces that are >> invariant with context. Hmm, would the half-tables contain a mapping between Dog's names and ListItem widget ids? Removes the need to search the List for the right ListItem. Cool. -- Chris M. Moore Senior software engineer Alenia Marconi Systems Portsmouth in the UK
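[Aside: a rough C++ sketch of the click-callback flow Moore describes above. The types and functions (Dog, List, dogs_owned_by, on_owner_selected) are stand-ins invented for the example, not any particular tool's API; the callback is the only piece that sees both domains.]

#include <iostream>
#include <map>
#include <string>
#include <vector>

// --- Application domain side (stubbed for the example) ----------------------
struct Dog { std::string name; };

// Hypothetical service exposed by the application domain's interface.
std::vector<Dog> dogs_owned_by(const std::string& owner_name) {
    static const std::multimap<std::string, std::string> ownership = {
        {"Alice", "Rex"}, {"Alice", "Fido"}, {"Bob", "Spot"}};
    std::vector<Dog> dogs;
    auto range = ownership.equal_range(owner_name);
    for (auto it = range.first; it != range.second; ++it) {
        dogs.push_back(Dog{it->second});
    }
    return dogs;
}

// --- GUI domain side (stubbed for the example) -------------------------------
class List {
public:
    // Stand-in for the real widget call that highlights matching ListItems.
    void highlight_items_labeled(const std::string& label) {
        std::cout << "highlight ListItem '" << label << "'\n";
    }
};

// --- The callback (bridge): invoked when an owner ListItem is clicked --------
// The GUI knows nothing about Dogs and the application knows nothing about
// ListItems; only this glue knows both interfaces.
void on_owner_selected(const std::string& owner_name, List& dog_list) {
    for (const Dog& dog : dogs_owned_by(owner_name)) {
        dog_list.highlight_items_labeled(dog.name);
    }
}

int main() {
    List dog_list;
    on_owner_selected("Alice", dog_list);   // simulate the user clicking 'Alice'
    return 0;
}

A generated half-table, as mentioned above, would simply replace the label-matching step with a direct lookup from Dog name to ListItem widget id.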
Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > Each would have ListItems related to them > where each ListItem carried display information about a > specific dog or > owner. At this point Whipp and I part company because I would simply > have a relationship between ListItems to reflect ownership. My relationship is almost the same. I have a relationship with an assoc object. The relationship is symmetric-reflexive Mc. The verb phrase is something like "selection causes highlighting of". It doesn't reflect ownership, but it's a simple relationship. > Without worrying about which object is active and manages the > highlighting, let's just assume some object is in charge and > that it has > a state action that gets executed when an incoming event from > the Window > Manager announces a new selection from one of the lists. The > way I would > probably do this is to send an event in that action to a > wormhole asking > for the corresponding Name. Managers tend to set off alarm bells. The model should describe the behaviour of the system. It doesn't need to include the managers because it knows how it behaves. I also see no reason to have a state model. The only state behaviour seems to be that a list item can be "selected" or "not selected" - this type of state model is not very useful. Finally, if you have a relationship between list items, even if it is slightly different to mine, then there is no need to ask another domain: you should simply navigate the relationship! > > It's obvious that there are widely varying opinions. I don't mind > > comparing and contrasting the two (or more!). > > Like anything else in software, there are always several ways to do > something. Much of my disagreement with Whipp stems from a different > viewpoint. I tend to look at an application from an abstract, generic > stance while Whipp has a much more code-centric view, This is odd. I would say precisely the same thing ... only I'd substitute the name (s/Whipp/Lahman/g) ;-) Dave. Subject: Re: (SMU) Need help conceptualizing chris.m.moore@gecm.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > David.Whipp@infineon.com writes to > shlaer-mellor-users: > My relationship is almost the same. I have a > relationship with an assoc object. The relationship is > symmetric-reflexive Mc. The verb phrase is something > like "selection causes highlighting of". This is better but I think your relationship's utility is limited while it increases the coupling between the domains. Any changes in dog ownership must be reflected in your "selection causes highlighting of" relationship. -- Chris M. Moore Senior software engineer Alenia Marconi Systems Portsmouth in the UK Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Moore... > > At this point Whipp and I part company because I would simply > > have a relationship between ListItems to reflect ownership. > > Whipp is right when he points out that this is domain pollution. It fails the > domain substitution principle. When you give the GUI domain to your co-worker > to use in another project he's bound to come and ask you what the ownership > relationship between ListItems is. You are correct, as phrased. The 'reflect ownership' was careless. The relationship in the GUI domain really should be something like 'causes highlighting of'. I believe that in this case the relationship really exists in the GUI domain and its nature is quite different than the relationship between a DOG and an OWNER in the application domain. It defines a visible manifestation in the display of a relationship between specific instances of two groups of text strings. Having said that, I am less worried about domain pollution in UI domains. I believe I said elsewhere that I regard them as exceptions because I see them as just smart bridges that are being documented in the OOA. In reality a UI domain is almost always application specific, or, at best, limited to a narrow class of applications (e.g., anything dealing with Pets) -- if it isn't then one has probably built a Window Manager rather than a UI domain. When you are using something like MFC, there is nothing to prevent going directly from the application to the Window Manager domain. However, I don't like doing that, especially when porting to multiple platforms. There are three reasons: (1) The GUI domain prevents domain pollution in the application domain with Window Manager stuff. If one is going to pollute when porting, better to do it in a buffer domain. (2) In our shop there is no separation between architects and analysts. Very egalitarian, but it invites implementation pollution. Since GUIs are notoriously hard to port, I want any such pollution limited to a buffer domain. If the GUI domain is recognized to be a smart bridge, then it focuses the issues and makes things easier to review for implementation pollution. (3) We never know a priori whether we will have to use manual code generation for performance reasons.
We don't find out until the application is built and we try it with generated code. If we have to use manual code generation, then I want to isolate the GUI as much as possible to reduce coupling within the application code that we will have to manually maintain going forward. To do that I need the buffer domain represented by GUI. > I think the confusion arises because people confuse the GUI domain with what > they want their GUI to do. My goal *is* to make the GUI reflect what the GUI presents to the user. That is the basis of separating it as a subject matter from the rest of the application. But... > My GUI domain contains Windows, Widgets, Lists, > ListItems etc. and if I'm feeling adventurous Callbacks. I see it as more abstract than this. This sounds to me like the province of a generic Window Manager. In the GUI domain I see the fundamental object being a Screen that is a collection of information that the user views together. The individual items in that collection would ultimately be contained in separate Window, Widget, etc. objects on the Window Manager side of the bridge. Similarly, those individual items might appear in multiple objects on the application side of the bridge (though this is less likely since the application exists to implement the user's specific view). The GUI domain should know nothing about those semantics. > But the instantiation > of the domain at runtime consists of 2 Lists containing ListItems labeled with > owner and dog names. When the user clicks a ListItem containing an owner's name > a callback (another name for a bridge) is invoked which invokes the > dogs_owned_by(owners_name) service which returns a set of Dog objects. Then we > loop through the set of Dog objects and highlight the ListItems in the dog-based > List whose label matches the Dog's name. I am not sure how this is different than what I proposed. > >> context-dependent bridge code > >> that sits between two domain interfaces that are > >> invariant with context. > > Hmm, would the half-tables contain a mapping between Dog's names and ListItem > widget ids? Removes the need to search the List for the right ListItem. Cool. Certainly. My example was the brute force scenario and it didn't address how the various bridge routine implementations were created. There is nothing to prevent using half-tables to generate the B1_D code automatically. In fact, I could see using them to eliminate B1_D completely and replacing the I1 and I2 implementations with more direct accesses of domain internals -- PROVIDED it was sufficiently generic. This is not exact agreement with Whipp because I still want to reduce implementation coupling between the domains so that they can be built once in a particular computing environment. But that does not preclude, say, having the implementations of I1 and I2 in separate .cpp files, generated from half-tables, that are not necessary to build the domain. It is possible to do this using Whipp's approach, but I am skeptical about how generic it would be because inheritance trees tend to be fragile. The danger I see is ongoing tinkering with the computing environment architecture as new applications present new bridging issues. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > At this point Whipp and I part company because I would simply > > have a relationship between ListItems to reflect ownership. > > My relationship is almost the same. I have a relationship with > an assoc object. The relationship is symmetric-reflexive Mc. > The verb phraze is something like "selection causes highlighting > of". True. There was a bit too much implied in 'simply'. The real difference is in the way the bridge is instantiated. I used a bridge to establish the relationship in response to a specific selection by the user (i.e., the relationship exists for the 'current' selection) and changes are bounded by events in the GUI domain. As I understand your proposal, all possible relationships are instantiated when the domain is created and they are modified (added/removed) by changes to the relationships in the application domain via an implicit bridge so that changes are bounded by events in the application domain. Thus a selection event triggers simple relationship navigation rather than a bridge request. > It doesn't reflect ownership, but its a simple relationship. Yes, see my response to Moore. > > Without worrying about which object is active and manages the > > highlighting, let's just assume some object is in charge and > > that it has > > a state action that gets executed when an incoming event from > > the Window > > Manager announces a new selection from one of the lists. The > > way I would > > probably do this is to send a event in that action to a > > wormhole asking > > for the corresponding Name. > > Managers test to set off alarm bells. The model should describe > the behaviour of the system. It doesn't need to include the > managers because it knows how it behaves. I also see no reason > to have a state model. The only state behaviour seems to be > that a list item can be "selected" or "not selected" - this > type of state model is not very useful. Would you be happier if I had said, "...which object has responsibility for highlighting"? I am not talking about creating some Click Manager object. But every GUI domain has to have at least one active object because the interface to the OS Window Manager is inherently event based. If you start having the bridge make decisions about when events get ignored the architecture is usurping elements of the system logic that are properly part of the analysis. [BTW, I have no problem with hiding winproc itself in the architecture and letting it handle mundane things like WM_MOUSE_MOVE so that the OOA only sees the mouse position through attribute values. But I want to be able to see relevant flow of control in the domain. In this simple example it is not obviously important, but GUIs aren't this simple. For example, the GUI very likely has a couple of SAVE buttons that affect what events one wants to accept, ignore, or cache. In that expanded context selection events are likely to be relevant to flow of control and that flow of control is a problem space issue so I want to see it in the OOA.] > > Like anything else is software, there are always several ways to do > > something. Much of my disagreement with Whipp stems from a different > > viewpoint. 
I tend to look at an application from an abstract, generic > > stance while Whipp has a much more code-centric view, > > This is odd. I would say precisely the same thing ... only I'd > substitue the name (s/Whipp/Lahman/g) ;-) Really? It seems to me that you tend to propose mechanisms for moving stuff into the architecture, while I want to see more in the OOA. The GUI domain itself is an example. I regard is as being a smart bridge, but I still want to see what it does in the in the OOA. You regard it (in this context) as a dumb, pass-through domain w/o flow of control where all the responsibilities should be handled in the architecture via implicit bridges. You see the meta model used by the implementation to be routinely modifiable while I expect architectures to be build-once (i.e., the Analyst's colorization choices are fixed, though widely varied). And I'll give 3:2 odds that you regard colorization as the Architect's job rather than the Analyst's. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- chris.m.moore@gecm.com wrote: > This is better but I think your relationships utility is > limited while it increases the coupling between the domains. > Any changes in dog ownership must be reflected in your > "selection causes highlighting of" relationship. Lets take these 3 things one-by-one: Yes the relationship is fairly limited. Its just a relationship. A complete UI domain would have a lot more in it. The best way to test for coupling/polloution is to consider hypothietical (or real) domain replacements. So lets imagine some scenarios: . See which football team a player plays for. . See which cell a mobile phone is in. . See which boards a power supply is connected to. As you can see, there are a broad range of situations where the UI domain describes appropriate behaviour (Remember, its only the UI semantics, not the display system). Finally, you mention the need for changes in one domain to be reflected in the other. Well, that is implicit in the concept of counterparting. If the same essence is extended to more than one subject matter (as all the above examples are) then they will need to be linked. The linking is described by the definition of the bridge between the domains, not by the domains themselves. The domains maintain their meaning even when separated from each other. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > True. There was a bit too much implied in 'simply'. The > real difference is in the way the bridge is instantiated. > I used a bridge to establish the relationship in response > to a specific selection by the user (i.e., the relationship > exists for the 'current' selection) and changes are bounded > by events in the GUI domain. 
As I understand your proposal, > all possible relationships are instantiated when the domain > is created and they are modified (added/removed) by changes > to the relationships in the application domain via an > implicit bridge so that changes are bounded by events in the > application domain. Thus a selection event triggers simple > relationship navigation rather than a bridge request. You are correct that, in my model, I had a relationship that described the links, and navigated this local relationship to determine the highlighting. You are also correct that, in my example implementation, I chose to implement the relationship in the UI domain, thus requiring modifications in the APP domain to be synchronously reflected in the UI domain. This is an implementation decision. I could have implemented the same model by querying the application domain whenever a selection was made. It would be more difficult to get all the behaviour correct, but it is possible. > Would you be happier if I had said, "...which object has > responsibility for highlighting"? I am not talking about > creating some Click Manager object. It is better, but I don't really see the need to identify an object with this responsibility. There is an object that knows about highlighting (it has an attribute for it), but it does not have to be responsible for setting/clearing that attribute. Any object, or SDFD, could do that. > But every GUI domain has to have at least one active object > because the interface to the OS Window Manager is inherently > event based. If that isn't implementation bias, I don't know what is. The decision to give an object a lifecycle should be based on what the object does in the context of its domain, not on the analyst's knowledge (or supposition) of its eventual implementation technology. > If you start having the bridge make decisions about when > events get ignored the architecture is usurping elements > of the system logic that are properly part of the analysis. Bridges don't make decisions: they're dumb. "Panes of Glass" is the quote that comes to mind. > [BTW, I have no problem with hiding winproc itself in the > architecture and letting it handle mundane things like > WM_MOUSE_MOVE so that the OOA only sees the mouse position > through attribute values. Actually, I _do_ have a problem with this: it implies that the attribute has a random value (from the perspective of the domain). > But I want to be able to see relevant flow of control in > the domain. Perhaps we're agreeing? > Really? It seems to me that you tend to propose mechanisms > for moving stuff into the architecture, while I want to see > more in the OOA. The GUI domain itself is an example. I > regard it as being a smart bridge, but I still want > to see what it does in the OOA. You regard it (in > this context) as a dumb, pass-through domain w/o flow of > control where all the responsibilities > should be handled in the architecture via implicit bridges. You are only reading half of what I say. Yes, I move many things out of a domain, to be implemented elsewhere. And I also say that the relationship between one domain and another should regularly involve mappings in the meta-model. This can be considered as moving stuff to the architecture, but only if you accept that "the architecture" is every domain in the system except the one that you're currently looking at. The UI domain in the dogs-owners example does, indeed, have no flow of control. At least, that's my analysis.
You cannot deduce from this statement that UIs have no flow of control; that their objects have no lifecycles, etc. All you can say is that this example was so trivial that I couldn't find any useful lifecycles. > You see the meta model used by the implementation to be > routinely modifiable while I expect architectures to be > build-once I do not beleive that the OOA meta model is perfect. So I tend to be flexible. But again, you should not deduce that I routinely modify the meta model. Most of the modifications I suggest are applicable to a wide range of problems. For example, my proposal that (M) attributes know their derivation, and not be explicitly calculated in state models, is a consequence of "one fact in one place" > (i.e., the Analyst's colorization choices are fixed, though > widely varied). You'll have to clarify: what is you definition of coloration? I know of 2: Either they are constrains on the model to constrain the OOA meta model to a subset of what's allowed or they are a means by which artifacts in the OOA are explicitly mapped to concepts in an architecture. These two definitions are equivalent, but have different emphesis: The former allows colorations to be independent of a specific architecture, whilst the latter defines the constraints in terms of a specific architecture. > And I'll give 3:2 odds that you regard colorization as > the Architect's job rather than the Analyst's. B-) You'd lose :-). Its neither, and both. Its the job of the system construction team (which is probably composed of architects and analysis). Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Need help conceptualizing lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- >From lahman@atb.teradyne.com Thu Oct 21 15:07:48 1999 >Responding to Riemenschneider... > >I assume there was a reason why you sent this offline(?) Sorry, I hit reply to compose the message, and then I forgot to change the To: I believe I had one other message that went somewhere unintended. Hopefully, I sent this one correctly. :-) > >I believe that the Window Manager domain will almost always be an architectural domain. >In some cases it might quite literally represent the operating system's interface for >graphics. More likely it is something like MFC or one of the commercial window >builders. This is basically a place holder at the Domain Chart level for third party >components and one would not do any analysis on the internals of the domain. > >The Domain Chart's lowest layer provides for the specification of such architectural >domains. To keep that layer abstract so that one doesn't have to re-label it every time >one ports, I tend to use the more generic 'Window Manager' for this sort of thing. The >main purpose, insofar as the OOA is concerned, is to define the bridge to it from domains >that _are_ part of the application (i.e., GUI). One needs to do this to clearly define >the subject matter, responsibilities, and level of abstraction of the client application >domain where you actually will do internal analysis. That is, defining that bridge >tells you what isn't to be modeled in the domain. > OK. Thanks for the clarification. 
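[A minimal sketch of the 'buffer domain' idea discussed above -- not from the original posts, and every name in it (IWindowManagerBridge, ConsoleWindowManagerBridge, highlightItem) is invented for illustration. The point is only that the GUI domain's outbound requests funnel through one narrow interface, so whatever the Window Manager placeholder turns out to be (MFC, X, a commercial window builder) is confined to a single replaceable translation unit.]

    // Hypothetical C++ sketch: the GUI domain sees only the abstract bridge.
    #include <iostream>
    #include <string>

    class IWindowManagerBridge {
    public:
        virtual ~IWindowManagerBridge() = default;
        // One outbound wormhole: ask the window system to (un)highlight an item.
        virtual void highlightItem(const std::string& listId,
                                   const std::string& itemLabel,
                                   bool on) = 0;
    };

    // One concrete realization; porting means rewriting only this class,
    // never the code derived from the GUI domain's OOA.
    class ConsoleWindowManagerBridge : public IWindowManagerBridge {
    public:
        void highlightItem(const std::string& listId,
                           const std::string& itemLabel,
                           bool on) override {
            // A real port would call the toolkit here (e.g. an MFC control).
            std::cout << (on ? "highlight " : "unhighlight ")
                      << itemLabel << " in " << listId << "\n";
        }
    };

    int main() {
        ConsoleWindowManagerBridge console;
        IWindowManagerBridge& bridge = console;   // the GUI domain holds only this
        bridge.highlightItem("dog_list", "Rex", true);
    }

[The same shape works whether the bridge body is hand-written or generated from half-tables; only the interface is visible to the client side.]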
Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > You are also correct that, in my example implementation, > I chose to implement the relationship in the UI domain, > thus requiring modifications in the APP domain to be > synchronously reflected in the UI domain. This is an > implementation desision. I could have implemented the > same model by querying the application domain whenever > a selection was made. It would be more difficult to > get all the behaviour correct, but it is possible. Interesting. I would speculate that it would be easier to ensure correct behavior by querying on the selection, especially in a distributed system. But I don't think it is worth worrying about in this context, so let's not go there unless you're really bored. > > Would you be happier if I had said, "...which object has > > responsibility for highlighting"? I am not talking about > > creating some Click Manager object. > > It is better, but I don't really see the need to identify > an object with this responsibility. There is an object > that knows about highlighting (it has an attribute for > it), but it does not have to be responsible for setting/ > clearing that attribute. Any object, or SDFD, could do > that. Here is another example of our differing view of the realms of architecture and analysis. To me the responsibility of highlighting is clearly an analysis issue. The Window Manager knows nothing about such semantics; it just announces a mouse click. The application shouldn't know about the semantics of GUI mechanisms (i.e., highlighting) either. But that is what the GUI domain lives for. There are decisions to be made (e.g., is anything highlighted when the screen is initially displayed or given focus?). I would never dream of burying the sort of logic necessary for those decisions in the architecture because I see it (highlighting) as fundamental to the subject matter of the GUI domain. Thus turning it on and off would be natural for somebody's life cycle. > > But every GUI domain has to have at least one active object > > because the interface to the OS Window Manager is inherently > > event based. > > If that isn't implementation bias, I don't know what is. > The decision to give an object a lifecycel should be based > on what the object does in the context of its domain, not > on the analyst's knowledge (or supposition) of its eventual > implementation technology. Let me see if I understand this. We are using a methodology that prohibits the expression of significant functionality affecting flow of control in anything except state machines. Every modern window operating system I know of is event based. The primary mission of this domain is to support communications between the application and the OS. The DC description of the bridge to the Window Manager will almost certainly be very heavy on events. But using state machines in the GUI domain to handle functionality is implementation pollution? B-) Actually, I will grant that the domain does not _have_ to have an active object. But I would argue that modeling anything else really would be highly unnatural for the subject matter. This is especially true given my view that the domain is really a glorified bridge whose logic needs to be exposed in the OOA. 
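[A sketch of the kind of active object lahman is describing -- purely illustrative, not from the posts; ScreenController, Event, and the state names are all invented. Its only job is to make the flow of control visible: whether a selection announced by the Window Manager is acted on, or cached because unsaved edits are pending.]

    // Hypothetical C++ sketch of one active object in a GUI/UI domain.
    #include <iostream>
    #include <queue>

    enum class Event { ItemSelected, EditStarted, SaveCompleted };

    class ScreenController {
        enum class State { Browsing, Editing };
        State state_ = State::Browsing;
        std::queue<Event> deferred_;            // selections cached while editing
    public:
        void dispatch(Event e) {
            switch (state_) {
            case State::Browsing:
                if (e == Event::ItemSelected)     std::cout << "highlight counterparts\n";
                else if (e == Event::EditStarted) state_ = State::Editing;
                break;
            case State::Editing:
                if (e == Event::ItemSelected) {
                    deferred_.push(e);            // not now: edits are unsaved
                } else if (e == Event::SaveCompleted) {
                    state_ = State::Browsing;
                    while (!deferred_.empty()) {  // replay what was cached
                        deferred_.pop();
                        std::cout << "replay deferred selection\n";
                    }
                }
                break;
            }
        }
    };

    int main() {
        ScreenController sc;
        sc.dispatch(Event::ItemSelected);   // acted on immediately
        sc.dispatch(Event::EditStarted);
        sc.dispatch(Event::ItemSelected);   // cached until the save completes
        sc.dispatch(Event::SaveCompleted);
    }

[Whether such a lifecycle belongs in the OOA or in the architecture is exactly the disagreement that follows.]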
> > If you start having the bridge make decisions about when > > events get ignored the architecture is usurping elements > > of the system logic that are properly part of the analysis. > > Bridges don't make decisions: they're dumb. "Panes of Glass" > is the quote that comes to mind. Exactly my point. You can hide Window Manager events as magical attribute updates, but the Analyst still has to understand when and how that happens. I just don't want the Analyst to have to look in the architecture for that information ("Aha! The boolean attribute will be set when WM_xxx is received. Now given the semantics of the GUI, that means that I have to worry about that only when..."). To me such updates are an analysis issue that should be exposed in the OOA as external events. > > But I want to be able to see relevant flow of control in > > the domain. > > Perhaps we're agreeing? Surely you jest. B-) > > Really? It seems to me that you tend to propose mechanisms > > for moving stuff into the architecture, while I want to see > > more in the OOA. The GUI domain itself is an example. I > > regard is as being a smart bridge, but I still want > > to see what it does in the in the OOA. You regard it (in > > this context) as a dumb, pass-through domain w/o flow of > > control where all the responsibilities > > should be handled in the architecture via implicit bridges. > > You are only reading half of what I say. > > Hes, I move many things out of a domain, to be implemented > elsewhere. And I also say that the relationship between > one domain and another should regularly involve mappings > in the meta-model. This can be considered as moving stuff > to the architecture, but only if you accept that "the > architecture" is every domain in the system except the one > that you're currently looking at. What is missing from both domains is the asynchronous logic of the relationship update. If you use implicit bridges there is no clue in the application domain that instantiating a new dog/owner instance relationship will have an immediate affect in the GUI. Similarly, there is no clue in the GUI domain that a relationship between ListItems is being changed asynchronously by some activity in an application domain. On the application side, who cares? But on the GUI side I think it is an analysis issue so a wormhole is required to capture that fact explicitly. > The UI domain in the dogs-owners example does, indeed, have > no flow of control. At least, that's my analysys. You cannot > deduce from this statement that UIs have no flow of control; > that their objects have no lifecycles, etc. All you can say > is that this example was so trivial that I couldn't find any > useful lifecycles. I think my issue is more fundamental than that. Counterpart updating is done via bridges. As such that updating is part of the architecture. Moreover, from the target domain's viewpoint it is inherently asynchronous processing. My assertion is that when you use counterpart updating in the architecture, you are hiding asynchronous processing from the analyst that may affect the domain's flow of control. To determine whether it really does or not the analyst must (a) know that it is happening and (b) look in the architecture to understand it. I feel that explicit wormholes in the OOA provides that explicit information. > I do not beleive that the OOA meta model is perfect. So I > tend to be flexible. But again, you should not deduce > that I routinely modify the meta model. 
Most of the > modifications I suggest are applicable to a wide range of > problems. For example, my proposal that (M) attributes know > their derivation, and not be explicitly calculated in state > models, is a consequence of "one fact in one place" I was thinking here more about your inheritance proposal for the bridge. I agree the (M) attribute solution was quite general. The inheritance approach would require a lot more convincing, though. B-) Saying that inheriting from common objects is a valid bridge mechanism for relationship updating is a lot more vague than your SDFD proposal that carried with it the SDFD formalism. > > (i.e., the Analyst's colorization choices are fixed, though > > widely varied). > > You'll have to clarify: what is you definition of coloration? > > I know of 2: Either they are constrains on the model to > constrain the OOA meta model to a subset of what's allowed > or they are a means by which artifacts in the OOA are > explicitly mapped to concepts in an architecture. > > These two definitions are equivalent, but have different > emphesis: The former allows colorations to be independent > of a specific architecture, whilst the latter defines the > constraints in terms of a specific architecture. I don't think it makes a difference. But you'll have to clarify: what's your definition of a meta model. B-) [You probably did before, but my attention span is limited nowadays and a lot of short term memory synapses are being randomly recycled.] As you have been describing it I have the impression that it is a superset of the OOA-of-OOA and the OOA-of-Architecture, albeit a generic Architecture. If so, then coloration of the meta model is *implementation* independent rather than architecture independent. But whose quibbling? > > And I'll give 3:2 odds that you regard colorization as > > the Architect's job rather than the Analyst's. B-) > > You'd lose :-). Its neither, and both. Its the job of > the system construction team (which is probably composed > of architects and analysis). Fair enough. I lean towards it being the analyst's job, but a team effort is OK, too. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > Interesting. I would speculate that it would be easier to ensure > correct behavior by querying on the selection, especially in a > distributed system. But I don't think it is worth worrying about in > this context, so let's not go there unless you're really bored. I'll keep it simple then: the important thing is that is the ownership of a dog changes while its owner is selected, then the display must change. So a message has to go from the application to the UI (unless the UI polls in a tight loop!) That message might contain data, or the UI might query for the data when it get the update signal. > Here is another example of our differing view of the realms of > architecture and analysis. To me the responsibility of > highlighting is clearly an analysis issue. The Window > Manager knows nothing about such semantics; it just announces > a mouse click. The application shouldn't know about the > semantics of GUI mechanisms (i.e., highlighting) either. I think I didn't make my analysis clear enough when I sent the code. 
My model used my automagic DFD thing. So the sequence of events for highlighting is:

A mouse click is interpreted in the windowing system to determine which component to send the message to. That component is the windowing system's counterpart of the Dog/Owner/ListItem - perhaps a text_field. That specific component sends a message to the UI domain.

The bridge into the UI domain translates the message and activates the appropriate SDFD. That SDFD simply sets the "is_selected" attribute on the appropriate ListItem instance.

This is where the non-standard DFD kicks in. The derivation of "is_highlighted" attributes is dependent on the related "is_selected" attributes. So the appropriate "is_highlighted" attributes get refreshed. ('appropriate' means that they can be reached by following the reflexive relationship. A dumb implementation might update them all).

At this point, the bridge from the UI to the windowing system notices that an "is_highlighted" attribute has changed. So it sends a message to the windowing system. ('notices' probably means that it's an observer). No event in the UI is needed because my bridges have a white-box view for read-only accesses to attributes.

Finally, the windowing system triggers the paint() method, or whatever else is needed.

In this entire process, the application never knows that selection or highlighting has taken place. It's simply not relevant. If it had been relevant, then one of its bridges would have told it (because we'd have built the system to include such a bridge). > But that is what the GUI domain lives for. There are > decisions to be made (e.g., is anything highlighted > when the screen is initially displayed or given focus?). > I would never dream of burying the sort of logic > necessary for those decisions in the architecture > because I see it (highlighting) as fundamental to the > subject matter of the GUI domain. Thus turning it on > and off would be natural for somebody's life cycle. Hopefully you can see that my view is identical. The decision occurs in the UI domain (note, however, that I don't call it a GUI: we don't _need_ graphics for this behaviour. It's simply the UI). > > If that isn't implementation bias, I don't know what is. > > The decision to give an object a lifecycle should be based > > on what the object does in the context of its domain, not > > on the analyst's knowledge (or supposition) of its eventual > > implementation technology. > > Let me see if I understand this. [windowing systems use > events] But using state machines in the GUI domain to handle > functionality is implementation pollution? B-) No, but using state machines ***BECAUSE*** the implementation does is implementation pollution. > Actually, I will grant that the domain does not _have_ > to have an active object. But I would argue that > modeling anything else really would be highly unnatural > for the subject matter. Actually, users don't really like stateful behaviour. Well, novices might, but power users don't want to click through 7 menus and 5 dialogues to do something simple. Similarly, if I change something, then I expect to see the change -- I don't want to press a refresh button. If you think in terms of state machines, then it's easy to forget to send all the events needed to keep everything in step. I find it far easier to push that sort of thing into the architecture. Some state behaviour is necessary, but not as much as is often found. > Exactly my point.
You can hide Window Manager events as magical > attribute updates, but the Analyst still has to understand > when and how that happens. I just don't want the Analyst to > have to look in the architecture for that information ("Aha! > The boolean attribute will be set when WM_xxx is received. Now > given the semantics of the GUI, that means that I have to worry > about that only when..."). To me such updates are an analysis > issue that should be exposed in the OOA as external events. I think the bit that you missed is that my update of "is_highlighted" is triggered by (amonst other things) an update to "is_selected". The fact that WM_xxx is mapped to the setting of "is_selected" is defined in the bridge (The mapping can be to an SDFD that then sets is_selected) > What is missing from both domains is the asynchronous logic of the > relationship update. If you use implicit bridges there is no clue in > the application domain that instantiating a new dog/owner instance > relationship will have an immediate affect in the GUI. But the application domain doesn't (shouldn't) know that its connected to a GUI. So why should it know that its changes are (or aren't) immediately visible to a user. > Similarly, there is no clue in the GUI domain that a > relationship between ListItems is being changed > asynchronously by some activity in an application domain. > On the application side, who cares? But on the GUI side I > think it is an analysis issue so a wormhole is required to > capture that fact explicitly. I can agree that the GUI might, potentially, want to know when is_selected is set. If so, then you need to create an SDFD to handle the incoming signal. But I get bored creating SDFDs that do nothing more than set a single attribute. So I cheat and label the attribute "(W)" to indicate that it is updated asynchronously (via a Wormhole). This is similar to marking an attribute with "(M)". But I always remember that it's simply a syntactic shortcut. If I need the SDFD to do something more (say, to generate an event in addition to setting the attribute) then the attribute loses its tag and I create an SDFD. (And then change the bridge to use that SDFD). > I was thinking here more about your inheritance proposal > for the bridge. I never proposed using inheritance to specify a bridge. My use of inheritance is an implementation technique. > I don't think it makes a difference. But you'll have to > clarify: what's your definition of a meta model. The OOA-of-OOA is the meta-model that defines what an OOA model means. Other modeling formalisms are their own meta models. > > Its the job of the system construction team (which > > is probably composed of architects and analysis). > > Fair enough. I lean towards it being the analyst's job, > but a team effort is OK, too. It might not be a team, but it is a third hat that is worn during development: you have application people, architecture people and system people. One person can wear different hats at different times. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: (SMU) Re: Why should I? Chris Curtis writes to shlaer-mellor-users: -------------------------------------------------------------------- To all who contributed to answering my question, thank you. It's been a very educational process... 
they wanted to see what all that "analysis and documentation junk" did for coding, so the goal was to have that transaction piece coded this week. Wonder of wonders, miracle of miracles, they discovered that it actually doesn't work without a User Requirements spec up front, followed by a complete DC, etc... Turns out that a rigorous methodology does actually enforce engineering discipline. :-) As a side benefit, I'm starting to feel like I just might get the hang of this analysis stuff. Every now and then I catch one of my own dumb mistakes. ------------------------------ Chris Curtis Systems Engineer Satel Corporation "Where am I, and what am I doing in this handbasket?" Subject: (SMU) Synchronous services? Chris Curtis writes to shlaer-mellor-users: -------------------------------------------------------------------- I'll stick with my transaction example, because it's one I've explored in detail the last couple of weeks... Let's say I've modeled a Batch which controls one or more Transactions. The Batch receives an event 'Post Batch' which moves it into a Posting state. It now has to 'post' (apply) each Transaction related to it, but cannot itself move to the 'Posted' state until all the related Transactions are themselves Posted. I'm struggling with how to model this... it seems like it has to be a synchronous service 'post()' of Transaction, because the Batch instance has to make sure each Transaction gets posted. The only way I see it can do that is to loop through all the instances of Transaction to which it is related ... but if it sends an asynchronous event to a Transaction instance, it can't receive another event (like 'TRANS:DonePosting') until it has finished looping through. End result: it can't keep track of all Transaction instances it is supposed to be 'post'ing. Am I missing something here? ------------------------------ Chris Curtis Systems Engineer Satel Corporation "Where am I, and what am I doing in this handbasket?" Subject: Re: (SMU) Synchronous services? chris.m.moore@gecm.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > Chris Curtis writes to shlaer-mellor-users: > I'll stick with my transaction example, because it's one I've explored in detail the last couple of weeks... > > Let's say I've modeled a Batch which controls one or more Transactions. The Batch receives an event 'Post Batch' which moves it into a Posting state. It now has to 'post' (apply) each Transaction related to it, but cannot itself move to the 'Posted' state until all the related Transactions are themselves Posted. > > I'm struggling with how to model this... it seems like it has to be a synchronous service 'post()' of Transaction, because the Batch instance has to make sure each Transaction gets posted. The only way I see it can do that is to loop through all the instances of Transaction to which it is related ... but if it sends an asynchronous event to a Transaction instance, it can't receive another event (like 'TRANS:DonePosting') until it has finished looping through. End result: it can't keep track of all Transaction instances it is supposed to be 'post'ing. > > Am I missing something here? Every object has a lifecycle including Transaction. Have the Batch hold the number of post events it sends in an attribute. Then have the Transaction send an event back to the Batch when it has finished. 
The Batch has a state with a reflexive transition where it decrements the attribute until it reaches zero, whereupon it generates an event to itself to indicate that the batch has been processed. I think this question cuts to the heart of SM's weakness. The requirement for asynchronous communication is tiresome and, worse, misleading (when the architecture is synchronous). -- Chris M. Moore (chris.m.moore@gecm.com) Senior software engineer, Alenia Marconi Systems, Portsmouth in the UK Subject: Re: (SMU) Synchronous services? Gregg Kearnan writes to shlaer-mellor-users: -------------------------------------------------------------------- chris.m.moore@gecm.com wrote: > > Every object has a lifecycle including Transaction. Have the Batch hold the > number of post events it sends in an attribute. Then have the Transaction send > an event back to the Batch when it has finished. The Batch has a state with a > reflexive transition where it decrements the attribute until it reaches zero > whereupon it generates an event to itself to indicate that the batch has been > processed. For some reason, counting events seems to be a bit of an implementation, rather than analysis. Not sure why it seems this way...maybe because it is treating the events as if they were objects within the analysis. Here is another possibility. Each transaction object, upon completing its lifecycle, may not be required and can delete itself. Before deleting (essentially reaching the end of its terminal state), it can check to see if it is the only transaction related to the batch, and if so, send an event to the batch to notify it that all transactions have been completed. It can then unrelate itself from the batch, and die. If this doesn't work, because the transaction must stay around for later processing (I don't know this subject matter, so I'm guessing), maybe another possibility is to subtype the transaction, reflecting the states of the transaction (I recall this from Leon Starr's book).

    Transaction      <---is a --->  Processed transaction
    * id
    - other info     <---is a --->  Unprocessed transaction
                                    - type

Now, the Batch could send the event to each Unprocessed Transaction instance it is related to, and that instance could migrate to a Processed Transaction subtype when processing is completed. After migrating, it could check to see how many Unprocessed Transactions the batch is related to, and, if the batch is no longer related to any Unprocessed Transactions, notify the Batch instance that all transactions are complete. It occurs to me that under the simultaneous concept of time, there may be some race conditions if the last two or more transactions were completing their processing at the same time. They may check to see how many Unprocessed Transactions there are, and happen to get a count that changes immediately after it was checked. Any ideas on how to fix this problem? I'm trying to avoid having each transaction send back a "transaction complete" event. > > I think this question cuts to the heart of SM's weakness. The requirement for > asynchronous communication is tiresome and, worse, misleading (when the > architecture is synchronous). The only time I think it gets difficult to implement a synchronous architecture is when you send an event to an object and expect a reply.
If you implement the event generation as an actual call the the state of the receiving object, you run into the problem that the sending object can't receive any events (under OOA rules) until it has completed processing of the previous state. If the receiving object then tried to place a syncronous call back to the sending object, the rules would be violated. This problem gets really ugly if more than two objects are involved in a thread of control. Has anyone worked out this issue? I've given it only cursory thought, since our architecture is async. Subject: Re: (SMU) Need help conceptualizing chris.m.moore@gecm.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > The best way to test for coupling/polloution is to > consider hypothietical (or real) domain replacements. > So lets imagine some scenarios: > > . See which football team a player plays for. > . See which cell a mobile phone is in. > . See which boards a power supply is connected to. But these would all the modeled in the application domain. > As you can see, there are a broad range of situations > where the UI domain describes appropriate behaviour > (Remember, its only the UI semantics, not the display > system). To me the apps UI is defined by the initial set of instances of objects in the UI domain. It has no notion of application semantics. Of course, this means that requirements are often satisfied by bridges but it certainly makes requirements tracing easier. > Finally, you mention the need for changes in one > domain to be reflected in the other. Well, that is > implicit in the concept of counterparting. I have no problems with trains/owners having counterpart icons/ListItems in the UI domain. Maybe it's my strict Ada upbringing but the reflexive in your UI domain would allow selection of one owner's ListItem to highlight another owner's ListItem. You could subtype ListItem into LHSListItem, RHSListItem etc and have a 1-M between them but I think embedding ListBoxes in PanedWindows is simpler. > If the same > essence is extended to more than one subject matter > (as all the above examples are) then they will need to > be linked. > The linking is described by the definition of the > bridge between the domains, not by the domains > themselves. The domains maintain their meaning even > when separated from each other. Agreed. -- Chris M. Moore (chris.m.moore@gecm.com) Senior software engineer, Alenia Marconi Systems, Portsmouth in the UK Subject: RE: (SMU) Synchronous services? "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Chris Curtis writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >I'll stick with my transaction example, because it's one I've explored in detail the last couple of weeks... > >Let's say I've modeled a Batch which controls one or more Transactions. The Batch receives an event >'Post Batch' which moves it into a Posting state. It now has to 'post' (apply) each Transaction related >to it, but cannot itself move to the 'Posted' state until all the related Transactions are themselves >Posted. >I'm struggling with how to model this... it seems like it has to be a synchronous service 'post()' of >Transaction, because the Batch instance has to make sure each Transaction gets posted. The only >way I see it can do that is to loop through all the instances of Transaction to which it is related ... 
but if >it sends an asynchronous event to a Transaction instance, it can't receive another event (like >'TRANS:DonePosting') until it has finished looping through. End result: it can't keep track of all >Transaction instances it is supposed to be 'post'ing. Have you explored the path where "Transactions" are actually the events in the lifecycle of a "Customer" object, "Account" object, "BillableService" object, etc? In this case Transaction and Batch are not application objects at all, but implementation artifacts. Another possibility is to model Transaction and Batch as passive (i.e., without a lifecycle) objects, and have the Customer, Account, etc. objects receive events designating Transactions to be processed (e.g., Customer12: PostDeposit(TransactionID) ). I believe this is akin to the movement request in PT's disk-management jukebox example. IMHO, "Transactions" is a convenient word which allows you to avoid talking about the data which make up the transaction. As such, it is a concept in your _design_ toolbox. _Analysis_ (of the application domain) must examine the data inside and give it meaning. Hope this helps, Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Synchronous services? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Curtis... > Let's say I've modeled a Batch which controls one or more Transactions. The Batch receives an event 'Post Batch' which moves it into a Posting state. It now has to 'post' (apply) each Transaction related to it, but cannot itself move to the 'Posted' state until all the related Transactions are themselves Posted. A quibble here that reflects philosophy. Rather than 'Post Batch' I would prefer something less imperative. Ideally this event was generated somewhere else when some activity was completed. The event should announce the fact that the activity was completed (e.g., 'Batch Verification Completed'). If the event is external to the domain it may simple announce a status (e.g., 'Ready To Post') because it may be none of this domain's business how the outside world got to that status. [One can argue that external input events may be requests for services by a client and, therefore, it is not inconsistent to use an imperative for them.] The Deep Philosophical Purpose behind this is to keep state machines self contained. State machines are not supposed to know about context or history. A state machine should not know what is supposed to happen after it generates an event that is not self-directed. It does its thing and what follows is not its business. As soon as you use an imperative it implies that the action generating the event knows what somebody else should be doing. It is actually kind of tricky to write state machines that are truly self contained. One tends to always have one eye on the overall processing in the domain. Therefore it is good practice to avoid the imperatives. Forming an event name that is an announcement tends to force one to focus on the state machine in hand rather than the context. > I'm struggling with how to model this... it seems like it has to be a synchronous service 'post()' of Transaction, because the Batch instance has to make sure each Transaction gets posted. 
The only way I see it can do that is to loop through all the instances of Transaction to which it is related ... but if it sends an asynchronous event to a Transaction instance, it can't receive another event (like 'TRANS:DonePosting') until it has finished looping through. End result: it can't keep track of all Transaction instances it is supposed to be 'post'ing.

How you handle this depends upon how much you want to model of the posting safeguards. For example, if the 13th Transaction's posting fails, do you have to back out the previous 12? You probably do, but that may not be the current domain's responsibility. Let's walk through some possibilities...

Let's assume you are using a commercial RDB that routinely handles this idea of posting batched updates so that the database will be unmodified if one of them fails (i.e., the RDB has the notion of a transaction around the updates that is not committed until they have all been successfully processed). Since it is 3rd party software, it is in another domain. One way to deal with this is for Batch to have an action that does three things:

- it issues a bridge request to the RDB to open a database transaction
- it issues a bridge request to the RDB to post each Transaction (assuming Transaction is dumb and the event data is extracted from the Transaction attributes)
- it issues a bridge request to the RDB to close the database transaction

The bridge will generate an event to Batch when the RDB acknowledges that the posting was committed. That event transitions Batch to whatever state deals with a completed posting. Piece of cake, right? [At this point Dave Whipp's Warning Klaxon is already screaming, but let's travel this path a bit further...]

Alas, life is rarely so simple and you have to deal with errors. At this level of abstraction you have a problem if the RDB bridge produces an event when an individual Transaction posting fails, because there may be a bunch of bridge requests pending (i.e., the failure event returns asynchronously while the action is still pumping out posting requests). To deal with this explicitly in the OOA you might have to have a loop through state transitions in the Batch state machine so that each posting request is acknowledged individually. You can do this with a state that looks like (hope your mailer deals with fixed fonts and 60-character lines):

                        +-------------+  Error Raised
    Batch Verified      |   process   |--------------> to error processing
    ------------------->| Transaction |
                        |             |------+
                        +-------------+      |
                           |     ^           | Post Successful
                           |     |           |
                           |     +-----------+
                           |
                           | Posting Done
                           v
                  to post-posting activity

'Batch Verified' initiates the posting operation. 'Error Raised' and 'Post Successful' would be generated by the bridge. 'Posting Done' would be generated by the action when there were no more Transactions to post (i.e., posting the batch is completed). At this point a battery of your warning lights should all be glowing. The clue is that the processing of each individual Transaction posting is identical, down to the handling of an error; nothing depends upon the contents of Transaction. That is, the processing is algorithmic and no decisions made during that processing affect flow of control, other than that *an* error occurred that prevented the batch from being posted. This tells me that my level of abstraction is not correct for the domain. What this domain wants to do is to post a group of transactions.
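[The acknowledgment loop drawn above, rendered as a small sketch so the smell is easier to see -- this is an illustration with invented names (Batch, batchVerified, postSuccessful, errorRaised), not code from the post. Every pass through the loop is identical and touches nothing in the Transaction's contents, which is precisely the clue that the abstraction level is wrong.]

    // Hypothetical C++ sketch of the per-Transaction acknowledgment loop.
    #include <iostream>

    class Batch {
        enum class State { Posting, Posted, Error };
        State state_ = State::Posting;
        int unposted_;                   // Transactions still awaiting acknowledgment
    public:
        explicit Batch(int transactionCount) : unposted_(transactionCount) {}

        void batchVerified() {           // 'Batch Verified' starts the loop
            requestNextPosting();
        }
        void postSuccessful() {          // 'Post Successful' from the bridge
            if (state_ != State::Posting) return;
            if (--unposted_ == 0) {
                state_ = State::Posted;
                std::cout << "Posting Done -> post-posting activity\n";
            } else {
                requestNextPosting();    // the self-transition in the diagram
            }
        }
        void errorRaised() {             // 'Error Raised' from the bridge
            state_ = State::Error;
            std::cout << "-> error processing\n";
        }
    private:
        void requestNextPosting() {
            std::cout << "bridge request: post one Transaction ("
                      << unposted_ << " remaining)\n";
        }
    };

    int main() {
        Batch b(3);
        b.batchVerified();
        b.postSuccessful();
        b.postSuccessful();
        b.postSuccessful();              // takes the 'Posting Done' exit
    }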
Given the assumptions above, the RDB can take care of backing off postings of prior updates in the group if one fails -- that's why you bought the RDB. So you probably don't need anything elaborate (i.e., backing out Transactions) for the 'Error Raised' bridge event. The mechanism, opening and closing an RDB update transaction around the posting, is really an implementation issue for the particular RDB. In this case it is unlikely that the domain really needs to know about those details -- its service request to the bridge is, "Please post this flock of Transaction data and let me know whether you were successful". Taking this view, the domain gets simpler and the flow of control (e.g., what to do if a posting error occurs) is appropriate to the fact (from prior messages) that Batch is probably the only active object in the domain. That is, you don't really care about the routine processing of individual Transactions; you care about the processing of batches. Now your OOA for doing the posting operation is simpler: you have a Batch action that formats the data packet by navigating the Transaction relationships and ships out an event, 'Transaction Packaged', that you assign to the RDB bridge. The bridge will generate an event, either 'Posting Error Raised' or 'Posting Completed', into the domain to continue processing. What could be simpler? Alas, there is still an interesting question: how do you package the event data you extracted from the individual transactions? I would argue that you are not concerned with that in this domain -- it is strictly an algorithmic protocol whose efficiency depends on what the resulting RDB update transactions look like. Since that is 'implementation' relative to this domain, you don't want to see the details in the Batch action for this. Entering Stage Left Smiling we have: Transform. I would probably invoke a transform that took the set of attributes for each Transaction (actually a set of individual Transactions' sets of attributes), packaged them into some collection, and returned a transient variable that I would ship off as the data in the event packet. In this domain I would conceptually think of that transient variable as a handle. If you visualize an ADFD for this, the action is pretty simple -- just a bunch of relationship navigations to create the Transaction attribute data flow into the transform and the flow out of the transform with the transient value to the event generator. At this level of abstraction it describes everything you need to know about the processing. The transform would really be a synchronous bridge to a low level implementation or architectural domain that contained the realized, algorithmic code for formatting the data packet in a manner that the bridge can easily and efficiently distribute among a suite of RDB transactions (e.g., SQL calls or native RDB API calls). That domain might be a simple shell around the RDB. So what happens if you don't have an RDB that does all the two-phase commit stuff for groups of transactions? I would argue that you don't care in this domain -- it would look exactly the same because you already have the correct level of abstraction for the subject matter. The problem that you do have is to create some low level implementation domain that *does* emulate the two-phase commit manipulations for whatever you are using for persistence.
That domain doesn't care about the semantics of Batch or Transaction; it just manipulates data updates as set of values with a predefined mapping (e.g., to table rows, etc.) and it handles things at the same level of abstraction as a commercial database engine. When you model it, you will be in the database engine business so the notion of 'transaction' will be different than in the higher level application domain containing the Batch and Transaction objects. So when might the domain with Batch get more complicated? It will get more complicated if the outcome of processing of an individual Transaction is important. Suppose the notion of posting in the application does not require all-or-none posting. Instead you only need to know when all of the transactions have been posted, but it may take several, incremental tries to do so. Now if one transaction fails you have to keep track of which are actually posted and which aren't and eventually you will have to recycle through the posting operation to get the rest posted. One way to do this would be to let the bridge update a flag attribute in Transaction to indicate successful posting and increment a count attribute in Batch so you know when all the transactions have been posted. Aside from a couple of IFs in appropriate actions, everything would look pretty much the same. This is fairly elegant, but the flow of control is not so obvious. A second way would be to essentially revert to my state diagram snippet above and modify it so you could set the flag and count attributes whenever 'Post Successful' was encountered. It is a bit more complicated (probably an extra state) because you don't want to update attributes at the 'Batch Verified' transition, but there is no doubt about what is driving the flow of control. In either case the domain will have to have some additional logic to repeat the posting operation if not all of the Transactions for a Batch have been posted, but that should not be very tough to do. I prefer the second option. The problem I see with the first option is that without understanding how Batch's count attribute gets updated, there is a possibility that it could be incremented to completion between the time that I check the count and the time that I launch another pass through the posting operation. This could lead to requesting a redundant Transaction posting. This is easily fixed in the bridge by always posting the flag attribute in the Transaction before incrementing the count attribute in Batch. Then the worst that happens is an empty posting operation because there will be no unposted Transactions. The problem is that I have to look in the bridge to determine whether this operation is safe. In the second option safety can be verified by pushing pennies around the state machines alone. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > Here is another example of our differing view of the realms of > > architecture and analysis. To me the responsibility of > > highlighting is clearly an analysis issue. The Window > > Manager knows nothing about such semantics; it just announces > > a mouse click. 
The application shouldn't know about the > > semantics of GUI mechanisms (i.e., highlighting) either. > > > > In this entire process, the application never knows > that selection or highlighting has taken place. Its > simply not relevant. If it had been relevant, then > one of its bridges would have told it (because we'd > have built the system to include such a bridge). Actually, that was pretty much what I though you were doing. [I was confused awhile back when you said an SDFD wasn't necessary, but that's not relevant now.] I still stick to the opinion that highlighting for a GUI domain is an important aspect of what a GUI does and how the user interacts with it. Therefore, it is relevant to the model. What your analysis indicates is that it is not relevant if you have automagic DFDs available to supplement the OOA. If you regard those DFDs as part of the OOA (i.e., the Analyst has to create them), then I could buy hiding the highlighting, but not otherwise. > > Actually, I will grant that the domain does not _have_ > > to have an active object. But I would argue that > > modeling anything else really would be highly unnatural > > for the subject matter. > > Actually, users don't really like stateful behaviour. Well, > novices might, but power users don't want to click though 7 > menues and 5 dialoges to do something simple. Similarly, > if I change something, then I expect to see the change -- I > don't want to press a refresh button. Au contraire. The user is not thinking in terms of state machines while the analyst is. The analyst has to have some notion of the way GUIs work to define the bridge descriptions in the DC. The user may have no idea what the notion of current focus means, but the analyst must to model this domain. These are concepts that apply to all GUIs, so they aren't implementation dependent. Given all that sort of stuff it seems very reasonable to me that an analyst using S-M would naturally describe the domain behavior in terms of events. > If you think in terms of state machines, then its easy > to forget to send all the events needed to keep everything > in step. I find it far easier to push that sort of thing > into the architecture. I agree with this idea. I also agree that a typical Window Manager produces a lot of irrelevant stuff at this domain's level of abstraction. I have been using the WM_ identifier because the intent is less ambiguous. I would actually expect the bridge and the architecture to feed the domain something other than literal WM_xxx events. Those messages would be more abstract and would identify only those user activities that are relevant at the GUI domain's level of abstraction. We just seem to differ about how abstract they should be. I think that one decides what level of messages are appropriate in the domain for expressing the user interactions first. One then make an OOA that is correct for that level of messages. One moves everything else into the bridge. But if one still has to generate extra messages to keep things in step at the GUI domain's level of abstraction, then Tough Twinkies -- that's life in modeling. > > Exactly my point. You can hide Window Manager events as magical > > attribute updates, but the Analyst still has to understand > > when and how that happens. I just don't want the Analyst to > > have to look in the architecture for that information ("Aha! > > The boolean attribute will be set when WM_xxx is received. Now > > given the semantics of the GUI, that means that I have to worry > > about that only when..."). 
To me such updates are an analysis > > issue that should be exposed in the OOA as external events. > > I think the bit that you missed is that my update of > "is_highlighted" is triggered by (amongst other things) an > update to "is_selected". The fact that WM_xxx is mapped > to the setting of "is_selected" is defined in the bridge > (The mapping can be to an SDFD that then sets is_selected) It is not the mapping of 'is_selected' that I am worried about. It is the relationship between ListItems in different lists where ListItemLink is the associative object. That update (e.g., deleting an instantiation of ListItemLink when a dog dies in the application domain) is the one that is asynchronous and hidden. The analyst sees nothing in the domain, in the relationship between an entry in the list representing owners and the entries in the list representing dogs, that would warn against navigating those relationships. Therefore the analyst might well initiate some processing to navigate those relationships while they were being updated. Crash city. To know whether navigation is safe the analyst needs to know about those asynchronous updates in the OOA. > I can agree that the GUI might, potentially, want to know > when is_selected is set. If so, then you need to create > an SDFD to handle the incoming signal. But I get bored > creating SDFDs that do nothing more than set a single > attribute. So I cheat and label the attribute "(W)" to > indicate that it is updated asynchronously (via a > Wormhole). Geesh! We have gone around about this for a gazillion messages to get here?? The '(W)' is what I want to see in the OOA. I just want something explicit in the OOA that rings warning bells for the analyst performing maintenance that there might be a pitfall. I would prefer it if the automagic DFDs were part of the OOA as well, but this is sufficient for attribute updates. Next question: what do you use to indicate that relationships and instances are being instantiated/removed in GUI on the fly as a result of activities in another application domain? > > I was thinking here more about your inheritance proposal > > for the bridge. > > I never proposed using inheritance to specify a bridge. > My use of inheritance is an implementation technique. Say what?!? To eliminate the bridge code between GUI and the rest of the application you introduced the notion of ListItem, etc. in the two domains inheriting from common objects. What else was SM_Object, etc. about? -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Good, we seem to be coming round to the opinion that we don't really disagree at all. It's just a communication problem. lahman wrote: > I still stick to the opinion that highlighting for a GUI > domain is an important aspect of what a GUI does and how > the user interacts with it. Therefore, it is relevant to > the model. What your analysis indicates is that it is > not relevant if you have automagic DFDs available to > supplement the OOA. If you regard those DFDs as part of > the OOA (i.e., the Analyst has to create them), then I > could buy hiding the highlighting, but not otherwise. I do regard the update formula for a (M) attribute to be part of the OOA.
(I hope there is no disagreement that is_highlighted is (M)). I do consider my DFD to be part of _my_ model. But it's not part of an SM-OOA (because OOA96 is very vague about how (M) attributes work). > The analyst sees nothing in the domain, in the > relationship between an entry in the list representing > owners and the entries in the list representing dogs, > that would warn against navigating those relationships. Therefore the > analyst might well initiate some processing to navigate > those relationships while they were being updated. Crash > city. To know whether navigation is safe the analyst > needs to know about those asynchronous updates in the OOA. You can always feed the object create/delete messages through an SDFD. Again, I get bored writing 1-process SDFDs. A (W) on an identifier seems appropriate to indicate that an object can pop into, and out of, existence at any time. [aside: putting it on the attribute helps keep things symmetric. It doesn't take a big leap to start thinking about putting (M) on identifiers: this allows my domain-DFD to create/delete derived objects whilst dealing only with attributes (which keeps the meta-model simple).] > Geesh! We have gone around about this for a gazillion > messages to get here?? That's what email debates are for :-) > The '(W)' is what I want to see > in the OOA. I just want something explicit in the OOA > that rings warning bells for the analyst performing > maintenance that there might be a pitfall. I would > prefer it if the automagic DFDs were part of the OOA > as well, but this is sufficient for attribute updates. So it seems we are happy now. Attributes can be tagged with (M) or (W). The update behavior of a (M) is defined by an update mechanism, which ought to be part of the OOA (i.e. my DFD, or similar). The behaviour of a (W) is random [but legal] for a domain until it is controlled by a bridge. It should be thought of as a shortcut that avoids tedious SDFDs: a (W) can _always_ be replaced by an explicit SDFD. (It saves ink and whiteboard space). > Next question: what do you use to indicate that > relationships and instances are being instantiated/ > removed in GUI on the fly as a result of activities > in another application domain? Hopefully you can guess that by now. The client domain knows nothing about it. The server domain either has SDFDs or has some (W) identifiers/attributes. Only the bridge knows that the change in one domain is related to a change in the other. > > I never proposed using inheritance to specify a bridge. > > My use of inheritance is an implementation technique. > > Say what?!? To eliminate the bridge code between GUI and > the rest of the application you introduced the notion of > ListItem, etc. in the two domains inheriting from common > objects. What else was SM_Object, etc. about? Implementation. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I do regard the update formula for a (M) attribute to be > part of the OOA. (I hope there is no disagreement that > is_highlighted is (M)). I do consider my DFD to be part of > _my_ model. But it's not part of an SM-OOA (because OOA96 > is very vague about how (M) attributes work). OK, that fixes a lot of the problem that I had.
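[Aside, purely illustrative: a rough C++ sketch of what a translator might emit for the two notations being compared here -- an attribute tagged (W) that only the bridge pokes through a wormhole, versus the one-process SDFD that does nothing but set it. The class, member, and accessor names are invented for illustration; nothing here is prescribed by the method.]

#include <mutex>

class ListItem {
public:
    // What might be generated for an attribute tagged (W): the domain never
    // calls this; only bridge/wormhole code does, at a time the domain
    // cannot predict.
    void wormhole_set_is_selected(bool value) {
        std::lock_guard<std::mutex> guard(lock_);
        is_selected_ = value;
    }

    // What might be generated for the equivalent explicit one-process SDFD:
    // the same update, but now visible as a process owned by the domain.
    void sdfd_set_selected(bool value) { wormhole_set_is_selected(value); }

    bool is_selected() const {
        std::lock_guard<std::mutex> guard(lock_);
        return is_selected_;
    }

private:
    mutable std::mutex lock_;      // one way an architecture might keep the
    bool is_selected_ = false;     // asynchronous update from corrupting a read
};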
> > Next question: what do you use to indicate that > > relationships and instances are being instantiated/ > > removed in GUI on the fly as a result of activities > > in another appliation domain? > > Hopefully you can guess that by now. The client domain > knows nothing about it. The server domain either has > SDFDs or has some (W) identifiers/attributes. Only the > bridge knows that the change in one domain is related > to a change in the other. I guess my question is: is there something you would tack onto the relationship or object, analogous to '(W)' or '(M)' for attributes, that provides the clue that there is an SDFD issue? The reason I ask is because you said that you use the '(W)' as a shorthand for 1-process SDFDs. The implication is that you would not use the '(W)' if you actually had a more complicated SDFD. I think I would like to see it _always_ on the IM as a clue that something special was going on in the SDFD. > > > I never proposed using inheritance to specify a bridge. > > > My use of inheritance is an implementation technique. > > > > Say what?!? To eliminate the bridge code between GUI and > > the rest of the application you introduced the notion of > > ListItem, etc. in the two domains inheriting from common > > objects. What else was SM_Object, etc. about? > > Implementation. Yes, but my issue was around the original notion of reducing coupling that prompted this thread. If you have to generate manually, then this level of implementation coupling is relevant -- you have to live with it while maintaining the code. I also see it as a more general issue for the architecture itself. I see the inheritance mechanism as basically fragile -- it can easily be broken as new bridging needs are encountered so that one ends up having to refactor the architecture as new application contexts are encountered. As I indicated before, I tend to lean towards the write-once school of architecture for a particular environment. [Though an adequate write-once architecture is a daunting task -- a classic situation where economies of scale demand specialization into the cottage industry of architecture OTS vendors we see today. When the state-of-the-art gets to the point where optimization gets done decently, I would expect in-house architectures to be a very rare breed.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous services? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Moore... > Every object has a lifecycle including Transaction. Have the Batch hold the > number of post events it sends in an attribute. Then have the Transaction send > an event back to the Batch when it has finished. The Batch has a state with a > reflexive transition where it decrements the attribute until it reaches zero > whereupon it generates an event to itself to indicate that the batch has been > processed. I would question whether Transaction has a life cycle for the example cited. As originally described it sounds very much like pass-through data where this domain is not at all interested in the specific values. 
In fact, if the only error processing necessary is to send an event back to the client announcing the failure, I think I could make a case that Whipp is right and this entire domain is either unnecessary or completely passive (i.e., everything can be done in the bridges). The Batch object might well be moved into the client and its life cycle could perform all the relevant activities using a transform as I described previously. > I think this question cuts to the heart of SMs weakness. The requirement for > asynchronous communication is tiresome and worse misleading (when the > architecture is synchronous). The goal of an S-M OOA is to describe the fundamental logic of the solution in a manner that is implementation independent. Whether asynchronous communication is actually needed is an implementation issue that depends upon the particular computing environment. For every non-trivial synchronous application I can hypothesize a porting situation to a distributed environment where asynchronous processing would be required. S-M merely requires that the OOA can be applied in any computing environment without modification. At worst it is simply the price one has to pay of true implementation independence. However, I think a lot of the perceived difficulties with modeling asynchronous behavior actually stem from incorrect modeling of domains. One of the best clues that the level of abstraction for a domain (or even an object) is too complex is the presence of overly complicated state machines. We have been burned by this sort of thing on several occasions so that as soon as we start seeing complicated state machines the alarm bells go off and we start looking more carefully at the domain mission statement. Most times we find that we can raise the level of abstraction and relegate purely algorithmic processing to an implementation or architectural domain via transforms. In doing so the number of states and events are significantly reduced. Another problem arises when state machines are not context independent. Ideally a state machine should be developed independently of context. It should reflect only the intrinsic behavior of the object (appropriate for the domain's level of abstraction). If this can be done properly, asynchronous behavior mostly comes for free. While this is admittedly difficult to do, I do not see it as a weakness inherent in dealing with asynchronous communication -- it simply requires a mental discipline to ignore context when building a state machine. [Note: a state machine in a particular domain _does_ depend upon the domain's context -- different views of the same real world entity in different domains can have quite different state machines. However, the domain's level of abstraction and the context of other objects define a suite of requirements on the object's functionality (e.g., that someone else must be notified when a particular activity is completed). The object's state machine is then built independently to satisfy those requirements. Jumping directly to defining state machines without taking the time for the intermediate step of identifying their functional requirements is probably a large part of the problem when state machines get polluted by context.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous services? 
lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Kearnan... > For some reason, counting events seems to be a bit of an implementation, > rather than analysis. Not sure why it seems this way...maybe > because it is treating the events as if they were objects within > the analysis. I will come out with a definitive Maybe here. IF keeping track of how many events is crucial to the domain's processing, then counting is not implementation. For example, if the domain cares about the processing of individual transactions (e.g., it does one thing when the Blue Transactions are done and something else when the Green Transactions are done), then it is fundamental to the solution to have a mechanism to know when the transactions are done. Counting events is one mechanism while your solutions are alternatives. Selecting one over the other will probably depend upon how other domain requirements are satisfied (i.e., the whole is greater than the parts) and personal style. As I argued in another message in this thread, I don't think this domain cares about individual transactions at all. If that is the case, then counting the events is an implementation issue and it doesn't belong in the this domain. Similarly, any of your solutions would also reflect implementation. That is, all of the writing of individual transactions should be relegated to another implementation or architectural domain. > Here is another possibility. Each transaction object, apon completing > its lifecycle, may not be required and can delete itself. Before > deleting (essentially reaching the end of its terminal state), it > can check to see if it is the only transaction related to the batch, and > if so, send an event to the batch to notify it that all transactions > have been completed. It can then unrelate itself from the batch, and > die. Counting instance relationships vs. response events. I don't see a substantive difference here insofar as the mechanism is concerned. The instance relationship goes away as a direct result of the response event that tells a transaction it can go away. Counting the relationships is just an indirect way of counting the response events -- only the one who counts changes. > If this doesn't work, because the transaction must stay around for > later processing (I don't know this subject matter, so I'm guessing), > maybe another possibity is to subtype the transaction, reflecting > the states of the transaction (I recall this from Leon Starr's book). > > Transaction <---is a ---> Processed transaction > * id > - other info <---is a ---> Unprocessed transaction > - type > > Now, the Batch could send the event to each Unprocessed Transaction > instance it is related to, and that instance could migrate to a > Processed Transaction subtype when processing is completed. After > migrating, it could check to see how many Unprocessed Transactions the > batch is related to, and, if the batch is no longer related to any > Unprocessed transactions, notify the Batch instance that all > transactions are complete, if this is the case. I would only buy this mechanism only if there was something else in the problem space that justified distinguishing between Processed vs. Unprocessed. If the only justification is to avoid counting response events, then one is substituting a complex mechanism for a simple one to do the same thing. 
Also, one would be introducing a static model element (another object) to resolve what is inherently a dynamic problem (i.e., correct flow of control). I would want a more substantial reason for adding objects. But if they are justified otherwise, then migration could be an elegant mechanism for capturing when processing is completed. > It occurs to me that under the simultaneous concept of time, there > may be some race conditions if the last two or more transactions were > completing their processing at the same time. They may check to see > how many Unprocessed transactions there are, and happen to get a > count that changes immediately after it was checked. > > Any ideas on how to fix this problem? I'm trying to avoid having > each transaction send back a "transaction complete" event. Another reason for solving the problem more directly. B-) Counting response events in a single Batch instance is simple and foolproof at the OOA level. Solving the race condition problem requires more work, either in the OOA or in the architecture. Probably the simplest solution would be for the architecture to block processing of the response event type so that only one at a time is processed. The downside is that one is hiding an important exception to the simultaneous view behind colorization. > > I think this question cuts to the heart of SMs weakness. The requirement for > > asynchronous communication is tiresome and worse misleading (when the > > architecture is synchronous). > > The only time I think it gets difficult to implement a syncronous > architecture is when you send an event to an object and expect a > reply. If you implement the event generation as an actual call > the the state of the receiving object, you run into the problem > that the sending object can't receive any events (under OOA rules) > until it has completed processing of the previous state. If the > receiving object then tried to place a syncronous call back to the > sending object, the rules would be violated. This problem gets > really ugly if more than two objects are involved in a thread > of control. > > Has anyone worked out this issue? I've given it only cursory thought, > since our architecture is async. I believe Moore's point was that it is harder to make an OOA work correctly under asynchronous assumptions than under synchronous assumptions. The concern you are expressing is about a particular way to implement a synchronous architecture. I would argue that the situation with which you are concerned is not synchronous. An event loop that does lead to such a situation implies an ambiguous view of time precisely because the actions are not reentrant (i.e., the value of the attribute data is ambiguous because it is shared across invocations). Put another way, a domain is synchronous only if all event threads promulgating from each external event can be demonstrated to not violate the rule about only one instance executing at a time and to terminate (i.e., reach an action that generates no events). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing "Leslie A. 
Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Mon, 25 Oct 1999 09:50:08 lahman wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Whipp... > etc .. I must admit to have not been following this thread as closely as I'd like, but the longer it goes on and the closer Mssrs. Whipp and Lahman come to agreeeing, the more I'm beginning to understand. One observation from this thread, which I believe is in reply to a small Domain problem posted a few weeks ago is: The original mail asked for a solution to a relatively small domain problem. With an expert (like Mr. Whipp OR Mr. Lahman) working on the problem the solution was derived within a couple of days. With two experts (like Mr. Whipp AND Mr. Lahman) working on the problem, the solution should be derived in about three weeks. My contribution, Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) Synchronous services? "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Mon, 25 Oct 1999 11:13:43 lahman wrote: >lahman writes to shlaer-mellor-users: >However, I think a lot of the perceived difficulties with modeling asynchronous >behavior actually stem from incorrect modeling of domains. One of the best clues >that the level of abstraction for a domain (or even an object) is too complex is the >presence of overly complicated state machines. We have been burned by this sort of >thing on several occasions so that as soon as we start seeing complicated state >machines the alarm bells go off and we start looking more carefully at the domain >mission statement. Most times we find that we can raise the level of abstraction >and relegate purely algorithmic processing to an implementation or architectural >domain via transforms. In doing so the number of states and events are >significantly reduced. > This statement reminds me of a thread I got involved with on the OTUG, and never got around to fully explaining my argument. That was to do with the UML notation of statecharts, which allows the use of nested and parallel states. My argument sort of boils down to what is stated above - that if your state machines become so complicated that you feel the urge to introduce parallel states or nested states in order to make them readable, then your state machines are too complex. In this situation, one should got back to your object diagrams and reconsider your objects and their relationships. Remember that statecharts originally came out of the structured analysis days as a means of doing functional decomposition without the use of Data Flow Diagrams. I question their use in OO development as wholly inappropriate. Why not reintroduce nested DFDs while we're at it? Sorry if I went a bit off topic, Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) Synchronous services? 
Chris Curtis writes to shlaer-mellor-users: -------------------------------------------------------------------- > Let's assume you are using a commercial RDB that routinely handles > this idea of posting batched updates so that the database will be > unmodified if one of them fails (i.e., the RDB has the notion of a > transaction around the updates that is not committed until they > have all been successfully processed). Since it is 3rd party > software, it is in another domain. Unfortunately terminology is overloaded a bit here. Transactions in my world are unrelated to transactions in the RDB sense. A Transaction is a debit, a payment, or a credit to a customer's account. A Transaction can either succeed or fail; a Batch needs to know only that some of its associated Transactions failed (not which one). A Batch with failed Transactions does not get rolled back; the failed Transactions must be either resubmitted (possibly after modification) or deleted. > At this point an battery of your warning lights should all be glowing. The clue is that the processing of each individual Transaction posting is identical, down to the handling of an error; nothing depends upon the contents of Transaction. That is, processing is algorithmic and no decisions made during that processing affect flow of control, other than *an* error occurred that prevented the batch from being posted. This tells me that my level of abstraction is not correct for the domain. > > So when might the domain with Batch get more complicated? It will > get more complicated if the outcome of processing of an individual > Transaction is important. Suppose the notion of posting in the > application does not require all-or-none posting. Instead you only > need to know when all of the transactions have been posted, but it > may take several, incremental tries to do so. Now if one > transaction fails you have to keep track of which are actually > posted and which aren't and eventually you will have to recycle > through the posting operation to get the rest posted. This is a more accurate view of what is supposed to happen, with the added requirement that the retry operation is manually triggered, and that Transactions in the Batch can be modified or removed. > One way to do this would be to let the bridge update a flag > attribute in Transaction to indicate successful posting and > increment a count attribute in Batch so you know when all the > transactions have been posted. Aside from a couple of IFs in > appropriate actions, everything would look pretty much the same. > This is fairly elegant, but the flow of control is not so obvious. I thought of this initially but wasn't real fond of keeping a counter, mostly for the reasons you go into below. I think this is somewhat tied into a RDB-jargon usage of transaction, too. > A second way would be to essentially revert to my state diagram > snippet above and modify it so you could set the flag and count > attributes whenever 'Post Successful' was encountered. It is a bit > more complicated (probably an extra state) because you don't want > to update attributes at the 'Batch Verified' transition, but there > is no doubt about what is driving the flow of control. > This wouldn't change my state model too much...I was just stuck on thinking synchronously. I couldn't figure out a way to loop through the list of related Transactions AND keep track of whether they all posted or not without leaving the loop that has the list of transactions. 
I still don't like keeping a counter that counts the number of instances of objects. It is separate from the actual number of instances and therefore could be out of sync with reality. Does this clarification that a Transaction has NOTHING to do with the DBMS concept of 'transaction' change any of the suggestions or analysis so far? ------------------------------ Chris Curtis Systems Engineer Satel Corporation "Where am I, and what am I doing in this handbasket?" Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman wrote: > I guess my question is: is there something you would > tack onto the relationship or object, analogous to '(W)' > or '(M)' for attributes, that provides the clue that > there is an SDFD issue? > > The reason I ask is because you said that you use the > '(W)' as a shorthand for 1-process SDFDs. The implication > is that you would not use the '(W)' if you actually had a > more complicated SDFD. I think I would like to see it > _always_ on the IM as a clue that something special was > going on in the SDFD. If I mark an attribute with (W) in addition to constructing an SDFD then I violate one-fact-in-one-place. The (W) indicates that the domain does not know when the attribute is modified. If I have an SDFD that does the modifying, then the SDFD knows, thus something within the domain knows. This does not mean that I do not see merit in noting the specialness of an attribute that is modified from an SDFD. The thing that is special is that there is no path of sequential constraint that allows the domain to make use of the knowledge embodied in the SDFD. In other words, there is no way to synchronize the access. One way round this is to use the concurrent model of time. In this, actions are atomic. So the action itself provides sequential constraint to synchronize the SDFD. But I like simultaneous time. This does not provide atomic behaviour. So I find it necessary to add it. There are a number of options. One is to add atomic accessor processes. Another is to mark an attribute as "synchronised" (cf java). Of these two, the latter is one-fact-in-one-place. The former may require pairs of processes. This still does not provide enough primitives to implement a 'test-and-set' action. For this, again there are 2 approaches: we can provide a test-and-set accessor, or we can mark an ADFD/SDFD as "atomic". In this case, I prefer the atomic process, because all xDFDs accessing an attribute may need to be atomic (I'm not sure about this). I could play devil's advocate on this but I'll leave that to someone else for now. But it is possible to argue that (a) it is possible to implement a test-and-set, and (b) if you need it then your IM is wrong. > Yes, but my issue was around the original notion of reducing > coupling that prompted this thread. If you have to generate > manually, then this level of implementation coupling is > relevant -- you have to live with it while maintaining the > code. Actually, the thread was about conceptualizing. It is my experience that decoupling/indirection hinders conceptualization. It's far easier to see how a direct implementation works than an indirect one. It's also easier to build. Once you understand what's going on, then you can work out where to add the layers of indirection. Inheritance is a valid implementation of counterparting. More than that, counterparting _can_ be a valid use of inheritance!
(valid in terms of Liskov, DIP, SIP, SAP, etc). "Can" means that there are times when it would not be valid. If the base class is more volatile than the derived class, then it's invalid. As a matter of interest, what do you think about inheriting from the architecture. I.e. how do you feel about class Dog extends architecture.Object {...} or class Dog implements architecure.Object {...} Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Synchronous services? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Curtis... > Unfortunately terminology is overloaded a bit here. Transactions in > my world are unrelated to transactions in the RDB sense. A > Transaction is a debit, a payment, or a credit to a customer's > account. A Transaction can either succeed or fail; a Batch needs > to know only that some of its associated Transactions failed (not > which one). A Batch with failed Transactions does not get rolled > back; the failed Transactions must be either resubmitted (possibly > after modification) or deleted. That is pretty much what I assumed. I tried to capitalize your Transaction and I tried to qualify the RDB's transaction with 'database' or 'RDB' to make the distinction. > > So when might the domain with Batch get more complicated? It will > > get more complicated if the outcome of processing of an individual > > Transaction is important. Suppose the notion of posting in the > > application does not require all-or-none posting. Instead you only > > need to know when all of the transactions have been posted, but it > > may take several, incremental tries to do so. Now if one > > transaction fails you have to keep track of which are actually > > posted and which aren't and eventually you will have to recycle > > through the posting operation to get the rest posted. > > This is a more accurate view of what is supposed to happen, with > the added requirement that the retry operation is manually triggered, > and that Transactions in the Batch can be modified or removed. Is the decision to modify vs. delete failed Transactions made in this domain or is it also an manual decision? Assuming the latter, then this domain still does not need to know much about Transactions. It could ship the failed transaction off to a UI domain (either a GUI or something that prints them). Then it would receive back an event(s) to either replace the entire transaction (to preserve the domain's ignorance about individual fields) or to delete it. Finally it would receive an event requesting that the failing Batch(es) be reposted. In this scheme the mechanics of the review is outside this domain and it doesn't really matter how it is done. So the level of abstraction hasn't changed much. However, it is now fundamental to the domain's mission that it track which Transactions have failed and which have posted successfully. > > One way to do this would be to let the bridge update a flag > > attribute in Transaction to indicate successful posting and > > increment a count attribute in Batch so you know when all the > > transactions have been posted. Aside from a couple of IFs in > > appropriate actions, everything would look pretty much the same. > > This is fairly elegant, but the flow of control is not so obvious. 
> > I thought of this initially but wasn't real fond of keeping a > counter, mostly for the reasons you go into below. I think this is > somewhat tied into a RDB-jargon usage of transaction, too. Given the description above, I don't think the counter is bad. In fact, I would argue that Batch could have three attributes -- a count of associated Transactions, a count of processed Transactions, and a count of successfully posted Transactions -- assigned in the IM before you even thought about state machines. The mission of Batch was modified above so that it is responsible for processing Transactions and tracking their status. Such attributes would be a logical part of the Batch abstraction that is independent of the mechanics of the actual posting in the state machine. This is reasonable information that the domain might be asked to supply to a client (and, in fact, probably would have to supply for each Batch for something like a report on the status of overnight processing). > This wouldn't change my state model too much...I was just stuck on > thinking synchronously. I couldn't figure out a way to loop through > the list of related Transactions AND keep track of whether they all > posted or not without leaving the loop that has the list of > transactions. I still don't like keeping a counter that counts > the number of instances of objects. It is separate from the actual > number of instances and therefore could be out of sync with > reality. As the responsibilities are currently described, even that state model may be overly complex. You probably don't need the event to handle an error if this domain doesn't process errors, but simply records them (e.g., a flag is set in Transaction to indicate it failed). When the count of Transactions processed equals total associated Transactions, the event signaling completion of the posting operation causes transition to a Batch state that navigates the relationship and collects all the failed Transaction identifiers to ship off to the client for review. > Does this clarification that a Transaction has NOTHING to do with > the DBMS concept of 'transaction' change any of the suggestions or > analysis so far? I don't think so. The important issue is to keep the RDB transaction out of this domain entirely. Even if the RDB can't handle a flock of Transactions together (i.e., in a single access) and return status on each one, I would still seek to have this domain batch up the Transactions into one outgoing bridge event via a transform. This might look something like:

           +-----------+
   Batch   |   Post    |     // This state formats outgoing
---------->| Xactions  |     // event to RDB domain to post
 Verified  |           |     // all Transactions
           +-----------+
                 |
                 | Xaction Posted
                 v
           +-----------+
           |   Check   |     // This state updates counts and
           |  Posting  |--+  // Transaction status until all
           |           |  |  // are accounted for
           +-----------+  |
              |  ^        |
              |  |        |  Xaction Posted
      Posting |  +--------+
     Complete |
              v
           +-----------+     // This state reports overall
           |  Report   |     // posting status and a list
           |  Status   |     // of failed Transactions to the
           +-----------+     // client, as necessary.

The 'Post Xactions' state has a transform to format the event data sent to the bridge to the RDB domain. It would generate that event and then sit on its thumbs. The bridge would open the RDB transaction, post values, and close the RDB transaction. As each value was posted the bridge would send back a pass/fail response event ('Xaction Posted') with that post's Transaction identifier.
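[Aside, purely illustrative: a minimal C++ sketch of the bookkeeping the 'Check Posting' state described next might reduce to, using the three count attributes suggested above plus a failure flag on Transaction. The member and function names are invented.]

struct Transaction {
    int  id = 0;
    bool failed = false;           // flag recording an unsuccessful posting
};

struct Batch {
    int total_transactions  = 0;   // count of associated Transactions
    int processed           = 0;   // count of processed Transactions
    int posted_successfully = 0;   // count of successfully posted Transactions

    // One 'Xaction Posted' (pass/fail) event from the bridge; returns true
    // when the action should generate 'Posting Complete' to self.
    bool check_posting(Transaction& xaction, bool pass) {
        if (pass) ++posted_successfully;
        else      xaction.failed = true;
        ++processed;
        return processed == total_transactions;
    }
};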
The 'Check Posting' state would update the flag in the relevant Transaction if it failed and update the count attributes in Batch. It would check for when the processed and total counts matched, in which case it generates 'Posting Complete'. Then the 'Report Status' state provides whatever the client wants in the way of reporting. [Another commonly used technique is to employ a low level implementation domain as a shell for the RDB, much like a GUI domain might interface to a OS window manager. That domain would parse the incoming event and handle the logic described. I tend to lean in this direction because I am prejudiced against complex bridges so I want to see complicated translations in the OOA.] [Notational quibble. I usually make event names announcements and state names imperatives. A lot of people advocate making them both announcements since the instance occupies a state *after* the action has completed. My justification for deviance from the purist form is that the notation displays the state action description in the STD box, so the box title should be a mnemonic for that action description.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > The original mail asked for a solution to a relatively small domain problem. > > With an expert (like Mr. Whipp OR Mr. Lahman) working on the problem the solution was derived within a couple of days. > > With two experts (like Mr. Whipp AND Mr. Lahman) working on the problem, the solution should be derived in about three weeks. Humor appreciated, but for the benefit of newcomers to SMUG I would point out that much of the discussion was not about the mainline methodology. In particular, (1) Most of the conversation was about the architecture, rather than the original modeling question. The architecture is admittedly not yet well defined in the methodology. (2) Most of the conversation was about Whipp's proposal for implicit bridges, '(W)', etc., which are innovations not currently present in RD. Thus this was more of an exposition against a Devil's Advocate. (3) Whipp and I were the participants. Thus this was simply the normal course of affairs. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) State diagrams lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > Remember that statecharts originally came out of the structured analysis days as a means of doing functional decomposition without the use of Data Flow Diagrams. > > I question their use in OO development as wholly inappropriate. Why not reintroduce nested DFDs while we're at it? You're just pulling my chain, right? Trying to elicit a knee-jerk, OTUG-style response. But for the benefit of the lurkers... First, I think the S-M use of STDs is quite different than their use in SA. In S-M they are limited to the dynamic description of a single object, which is very much in keeping with the OO principle of encapsulation of both data and the operations on that data. 
Second, I think that forcing *all* significant behavior to be represented in state machines is crucial to the methodology. By doing so one is forced to use the traditional pure message paradigm (i.e., events -- address, type, and data packet) for communications between objects -- something that is rapidly being lost in other methodologies because they mirror the compromises made in OOPLs where message is equated to method. Thus S-M preserves the "I'm Done" form of message rather than the context-dependent, "Do This" form. Third, STDs enforce a limitation on the degree of coupling between objects, which is also very much in keeping with the fundamentals of OT. A pure data transfer paradigm narrowly defines the information that can pass between objects. [A side effect of the OT approach is that the call graph for OT programs is at least as bad as, if not worse than, the call graphs for SA programs. Therefore, it is important to at least limit the degree of coupling if not the quantity.] Fourth, the use of STDs flattens the flow of control to a breadth-first description rather than the depth-first approach inherent in functional decomposition. The state actions are connected by a web of events, describing the high level flow of control at a particular level of abstraction. Meanwhile the state actions have transforms and synchronous services that provide algorithmic support. However, this support is severely limited in that these mechanisms cannot communicate with other objects. The closest one comes to the depth-first paradigm is represented by wormholes. But in this case the mechanism is again firmly restrained by the domain and bridging philosophy. Last, but not least, I think this view is unfair because it does not consider the overall package represented by the methodology. S-M does not have any special notation; it simply reuses the existing ERD, STD, DFD, etc. notations in a new context. One could argue that none of these are OT. However, the package that combines them certainly is. Thus one can't say simply that STDs are not OT. One would have to say that STDs as employed by S-M are not OT and I think that is a much tougher case to make. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > The original mail asked for a solution to a relatively small > domain problem. Yes, I did. :^) But it wasn't till now that I realized that I was asking pretty much an architectural question. I, for one, found the whole thread fascinating and hope it continues. Since the architecture is "admittedly not yet well defined in the methodology" what are some insights for hand-coding architectures? We don't use a tool.
You'd think that there > would be (here comes the p-word) patterns involved. :^) > A very good question, and one I was about to ask. :-) I have a decent grasp of the straightforward architecture (1 object = 1 class, FSM, events) but am a bit lost when trying to picture other possible architecture implementations, say, a DBMS-centric architecture. Hoping for some good discussion... -Chris ------------------------------ Chris Curtis Systems Engineer Satel Corporation "Where am I, and what am I doing in this handbasket?" Subject: Re: (SMU) State diagrams "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Tue, 26 Oct 1999 10:13:37 lahman wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Munday... > >> Remember that statecharts originally came out of the structured analysis days as a means of doing functional decomposition without the use of Data Flow Diagrams. >> >> I question their use in OO development as wholly inappropriate. Why not reintroduce nested DFDs while we're at it? > >You're just pulling my chain, right? Trying to elicit a knee-jerk, OTUG-style response. But for the benefit of the lurkers... > >First, I think the S-M use of STDs is quite different than their use in SA. In S-M they are limited to the dynamic description of a single object, which is very much in keeping with the OO principle of encapsulation of both data and the operations on >that data. > H. I think it's time for that capful of Drano. :-) My statement refers to statecharts and their use in OO development. Your reply (which as a S-Mer) is wholly correct and talks about STDs and their use in OO development. Not the same thing (IMO). Les. > >-- >H. S. Lahman There is nothing wrong with me that >Teradyne/ATD could not be cured by a capful of Drano >MS NR22-11 >600 Riverpark Dr. >N. Reading, MA 01864 >(Tel) (978)-370-1842 >(Fax) (978)-370-1100 >lahman@atb.teradyne.com > > > __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) Need help conceptualizing "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Tue, 26 Oct 1999 09:43:17 lahman wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Munday... > >> The original mail asked for a solution to a relatively small domain problem. >> >> With an expert (like Mr. Whipp OR Mr. Lahman) working on the problem the solution was derived within a couple of days. >> >> With two experts (like Mr. Whipp AND Mr. Lahman) working on the problem, the solution should be derived in about three weeks. > >Humor appreciated, but for the benefit of newcomers to SMUG I would point out that much of the discussion was not about the mainline methodology. In particular, > Humour! What humour? :-) Ok, I didn't put a smiley because there was just an incy bit of seriousness to my reply. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Need help conceptualizing "Levkoff, Bruce" writes to shlaer-mellor-users: -------------------------------------------------------------------- Please see the November issue of Embedded Systems Programming for an example of a naive architecture. 
All patterns included. Bruce -----Original Message----- From: Chris Curtis [mailto:chris@satel.com] Sent: Tuesday, October 26, 1999 9:42 AM To: shlaer-mellor-users@projtech.com Subject: Re: (SMU) Need help conceptualizing Chris Curtis writes to shlaer-mellor-users: -------------------------------------------------------------------- > Allen Theobald writes to shlaer-mellor-users: > Since the architecture is "admittedly not yet well defined in the > methodology" what are some insights for hand-coding > architectures? We don't use a tool. You'd think that there > would be (here comes the p-word) patterns involved. :^) > A very good question, and one I was about to ask. :-) I have a decent grasp of the straightforward architecture (1 object = 1 class, FSM, events) but am a bit lost when trying to picture other possible architecture implementations, say, a DBMS-centric architecture. Hoping for some good discussion... -Chris ------------------------------ Chris Curtis Systems Engineer Satel Corporation "Where am I, and what am I doing in this handbasket?" Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > If I mark an attribute with (W) in addition to constructing > an SDFD then I violate one-fact-in-one-place. The (W) > indicates that the domain does not know when the attribute > is modified. If I have an SDFD that does the modifying, then > the SDFD knows, thus something within the domain knows. I would argue that they are different facts. If we interpret the '(W)' to mean that the attribute is being updated from elsewhere (What is happening), then I think that is different than knowing How the attribute is updated, which is what the SDFD defines. Note that the OCM duplicates information about where events are targeted. But I think this is a similar situation. The event address for an event generator process in an action provides an instance identifier for an object (the How of communication analogous to the SDFD) while the OCM provides a generic description of object interaction (the What of communication analogous to the '(W)'). > This does not mean that I do not see merit in noting the > specialness of an attribute that is modified from an SDFD. > The thing that is special is that there is no path of > sequential constraint that allows the domain to make use > of the knowledge embodied in the SDFD. In other words, > there is no way to synchronize the access. My wife once described me as, "a dear boy, but easily confused," and I think that applies here. I thought the SDFD was providing the definition of the circumstances of the attribute update. More importantly, it was providing a description that the translation could use to ensure integrity (i.e., synchronization in the sense that foot shooting could be precluded). I am also confused about whether this paragraph is predicated on having both a '(W)' and an SDFD or only a '(W)'. > One way round this is to use the concurrent model of time. > In this, actions are atomic. So the action itself provides > sequential constraint to synchronize the SDFD. I am still in confusion here, probably because 'atomic' and 'concurrent model' are overloaded. When you say 'concurrent' do you mean that as synonymous with S-M's 'interleaved'? Also, I thought an action was always atomic (i.e., its state machine cannot process other events) regardless of whether state machines were running concurrently or not. 
> But I like simultaneous time. This does not provide atomic > behaviour. So I find it necessary to add it. There are a > number of options. One is to add atomic accessor processes. > Another is to mark an attribute as "synchronised" (cf java). > Of these two, the latter is one-fact-in-one-place. The > former may require pairs of processes. > > This still does not provide enough primatives to implement a > 'test-and-set' action. For this, again there are 2 approaches: > we can provide a test-and-set accessor, or we can mark an > ADFD/SDFD as "atomic". In this case, I prefer the atomic > process, because all xDFDs accessing an attribute may need > to be atomic (I'm not sure about this). > > I could play devils advocate on this but I'll leave that to > someone else for now. But it is possible to argue that (a) > it is possible to implement a test-and-set, and (B) if you > need it then your IM is wrong. Given my current state of confusion, you're going to have to put a lot more words around this before I have a clue what you are talking about. > Inheritance is a valid implementation of counterparting. More > than that, counterparting _can_ be a valid use of inheritance! > (valid in terms of Liskov, DIP, SIP, SAP, etc). "Can" means > that there are times when it would not be valid. If the base > class is more volatile than the derived class, then it's > invalid. > > As a matter of interest, what do you think about inheriting > from the architecture. I.e. how do you feel about > > class Dog extends architecture.Object {...} > or > class Dog implements architecure.Object {...} I don't have a problem with it in the implementation. I assumed that the original SM_object, etc. were architectural objects. What else would they be if they were defined only in the implementation? My only objections all along have been to (a) the coupling (in the manual generation case) and (b) the fragility of inheritance for bridges. Bridges are application dependent (i.e., they depend upon specific domain interfaces that are relatively arbitrary) which strikes me as potentially far more complex than mapping an OOA construct into a member of a relatively small set of relevant architectural mechanisms. My worry is that Dog will really inherit from architecture.FourLeggedObject, which inherits from architecture.CritterObject, which inherits from architecture.Object and every time you have a new application you will have to go in an tinker with that tree. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Need help conceptualizing David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > I would argue that they are different facts. If we interpret > the '(W)' to mean that the attribute is being updated from > elsewhere (What is happening), then I think that is different > than knowing How the attribute is updated, which is what the > SDFD defines. But this is not the meaning I attach to (W) :-) I use (W) to mean "updated from outside the domain". I consider an SDFD to be part of a domain. I would rather use a different notation (or annotation) to indicate that an SDFD may update it. > Note that the OCM duplicates information about where events are > targeted. An OCM is a derived view. None of its information is unique to that diagram. 
In contrast, the OIM is a primary view. > > This does not mean that I do not see merit in noting the > > specialness of an attribute that is modified from an SDFD. > > The thing that is special is that there is no path of > > sequential constraint that allows the domain to make use > > of the knowledge embodied in the SDFD. In other words, > > there is no way to synchronize the access. > > [...] I thought the SDFD was providing the > definition of the circumstances of the attribute update. No, the SDFD defines the behaviour following an external message. > More > importantly, it was providing a description that the translation could > use to ensure integrity (i.e., synchronization in the sense that foot > shooting could be precluded). AFAIK, it does not add additional synchronization. It might seem nice if it did, but I think that would be a mistake. The concept of synchronization is useful outside of an SDFD, so the concept should not be coupled to the SDFD. > I am also confused about whether this > paragraph is predicated on having both a '(W)' and an SDFD or only a > '(W)'. Either a (W) or an SDFD. > > One way round this is to use the concurrent model of time. > > In this, actions are atomic. So the action itself provides > > sequential constraint to synchronize the SDFD. > > I am still in confusion here, probably because 'atomic' and > 'concurrent > model' are overloaded. When you say 'concurrent' do you mean that as > synonymous with S-M's 'interleaved'? Oops. Sorry, I did mean "interleaved". > Also, I thought an action was > always atomic (i.e., its state machine cannot process other events) > regardless of whether state machines were running concurrently or not. In the simultaneous interpretation, different instances/objects can execute their own state machines simultaneously. In the interleaved interpretation, only one action within a domain executes at a time. If multiple actions execute simultaneously, then it is possible for 2 actions to access an attribute at the same time, with undefined behaviour. I assume that simultaneous time also allows an SDFD to execute simultaneously with a state action. There is no way for a domain model to constrain when an SDFD is triggered. So an SDFD can always corrupt an attribute with a badly timed access. [...] > > This still does not provide enough primitives to implement a > > 'test-and-set' action. For this, again there are 2 approaches: > > we can provide a test-and-set accessor, or we can mark an > > ADFD/SDFD as "atomic". In this case, I prefer the atomic > > process, because all xDFDs accessing an attribute may need > > to be atomic (I'm not sure about this). > Given my current state of confusion, you're going to have to > put a lot more words around this before I have a clue what > you are talking about. The problem is that we need to guarantee that the attributes can't be corrupted in simultaneous time. There are 2 types of corruption. One is that, in the middle of a set accessor, we have no guarantee that the value of the attribute is either the before- or after- value. A get-accessor could return *any* value, even one outside the attribute's attribute-domain. If we successfully avoid that hazard, then the second problem occurs when we attempt to modify a value. Say we read an attribute, determine a new value, then write it. Another action might write its own value between the initial read and the final write. If this happens in a bank, then your bank account may lose a credit.
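[Aside, purely illustrative: a small C++ sketch of the lost-update hazard just described and of the sort of atomic read-modify accessor discussed below. The attribute name 'balance' and the function names are invented.]

#include <atomic>

// Hazardous read-modify-write: two simultaneous actions may both read the
// same old value, and one credit is lost when they write back.
int  balance_plain = 0;
void credit_unsafe(int amount) {
    int old_value = balance_plain;        // read
    balance_plain = old_value + amount;   // another action may write in between
}

// Atomic read-modify accessor: the read and the write cannot be separated,
// so no interleaving of simultaneous actions can lose a credit.
std::atomic<int> balance{0};
void credit_atomic(int amount) {
    balance.fetch_add(amount);            // one indivisible read-modify-write
}

// The 'test-and-set' primitive is the same idea reduced to a flag: set it and
// learn its previous value in one indivisible step
// (std::atomic_flag::test_and_set in C++).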
A simple way to avoid this hazard is to provide atomic read-modify processes, the simplest being "test-and-set". Another simple way is to provide a "synchronised" notation for an entire ADFD/SDFD. This would require the behavior of that xDFD to follow the interleaved time interpretation, while allowing other xDFDs to be simultaneous.

Assigners are used to eliminate many of these types of problems, but I'm not sure that all are eliminated -- especially when SDFDs enter the picture.

Dave.

-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> Since the architecture is "admittedly not yet well defined in the
> methodology" what are some insights for hand-coding
> architectures? We don't use a tool. You'd think that there
> would be (here comes the p-word) patterns involved. :^)

I think the answer to this depends upon the answer to the question: How much work do you want to put into an architecture and translation? The answer to this is basically a tradeoff between short term commitments and long term benefits.

At one extreme you have the Poor Man's Architecture where you really don't have much of one at all. You have a few library classes to implement a queue manager, a smart pointer for relationships, and a list of pointers for M relationships, and you build some skeleton module templates that have a built-in infrastructure for efficient instance navigation. Probably your module template inherits from SM_Object, which implements a binary search Find routine to locate a specific instance. (I'll leave it as an exercise for the student to figure out how to make an instance identifier that can be used by a generic Find routine for any object's instances.) From there you simply code with the models in your lap. We essentially did exactly this on our first pilot project and it wasn't that tough to do -- the process diagrams pretty much translate easily into brute force code. The investment in 'architecture' is literally only a few days and the coding time is hardly different than a project without translation.

At the other extreme is true automatic code generation where a sophisticated engine eats the models from the CASE tool (worst case: a CDIF dump of a graphics tool) and magically produces code. This can take a huge up-front investment in time if you do it yourself but you essentially eliminate coding time, eliminate a lot of test & debug time fixing typos, and get automated model simulation for subsequent projects. [IMO, $50K for an OTS generator is a bargain for this -- if it will do the job for you performance-wise.]

Between these two extremes are a lot of gradations. Ideally, there should be nothing to prevent starting with a minimalist architecture and incrementally adding to it over time until one has the Whizz Bang architecture. Unfortunately one still has to do some planning up front to prevent the increments from involving a lot of refactoring. You have to have a vision of where you want to be eventually when you start doing the basics. Doing a preliminary OOA of the translation engine -- that you only partially implement initially -- will cover these bases, but that preliminary OOA is extra work up front.
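To make the Poor Man's end of that spectrum concrete, here is a minimal C++ sketch (hypothetical names, not our actual pilot code) of an SM_Object-style base class with a generic Find keyed on an instance identifier; a std::map's logarithmic lookup stands in for the binary search:

    #include <map>
    #include <string>

    class SM_Object {
    public:
        using InstanceId = std::string;            // generic identifier usable by any Find

        explicit SM_Object(InstanceId id) : id_(std::move(id)) {}
        virtual ~SM_Object() = default;
        const InstanceId& id() const { return id_; }

    protected:
        // Each translated class keeps one extent of its instances.
        using Extent = std::map<InstanceId, SM_Object*>;

        static SM_Object* find_in(const Extent& extent, const InstanceId& id) {
            auto it = extent.find(id);
            return it == extent.end() ? nullptr : it->second;
        }

    private:
        InstanceId id_;
    };

    // A skeleton module template for one application object: register the
    // instance in the class extent and reuse the generic Find.
    class Dog : public SM_Object {
    public:
        explicit Dog(InstanceId id) : SM_Object(std::move(id)) { extent_[this->id()] = this; }
        ~Dog() override { extent_.erase(id()); }

        static Dog* Find(const InstanceId& id) {
            return static_cast<Dog*>(find_in(extent_, id));
        }

    private:
        inline static Extent extent_;              // one extent per translated class (C++17)
    };

    int main() {
        Dog rex("Rex");
        return Dog::Find("Rex") == &rex ? 0 : 1;   // the shared Find locates the instance
    }

The queue manager, relationship smart pointers, and module templates described above would be added around this same skeleton as the architecture grows.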
So the decision about the strategy comes down to some very pragmatic cost/benefit issues that probably have little to do with software development. Basically you have to justify the strategy to a collection of marketeers, bean counters, etc. in terms of Dollars Now vs. Dollars Later.

[Apocryphal Aside... Several years ago we bought an OTS code generator. It turned out that it was not sufficiently optimized to satisfy our performance requirements for a core component of our system (a Really Big Device Driver). It was fine for everything else, but the device driver was the first major product we built with OT so it had a lot of visibility. Now we want to purchase the latest version of the code generator that solves all the previous problems. But the Group Engineering Manager, who is a Hardware Guy who has to sign the requisition, keeps asking: How do you know that *this one* will work? So we are playing with a deck stacked by ancient history. 20/20 hindsight indicates that we should have handled the Risk Management a tad better on the original purchase.]

The fact that RD is not well defined means that you will probably get as many suggestions as there are SMUG Architects. There are a lot of different approaches around. But they usually share some common features at a high level.

Most commercial architectures are template based. That is, they represent generic mechanisms that can be tailored to the details of a particular OOA (e.g., by plugging in object names). Such templates apply not just to .hpp/.cpp modules but down to things like 1:M relationship navigation. One can think of these as design patterns in themselves. [A variation on the template theme is script-driven code generation where, say, a perl script is selected to actually write the code for a particular artifact. This is useful for things like relationships that can have implementation fragments (declarations, navigations, etc.) in several different places -- a particular implementation of a relationship in code can be handled by a family of such scripts. All you need is a placeholder in the module template to identify the script family and context.]

Commercial architectures also always work from an OOA-of-OOA and they often have an OOA-of-Architecture or some sort of meta model that links OOA constructs to implementation artifacts. The commercial architectures differ primarily in that OOA-of-Architecture or meta model. Another differentiation is the way that the translation rules are implemented (more specifically, the degree of control the end user has). This can range from a fixed set (i.e., hard-wired) through template modifications (i.e., limited access) to complete control over code generation scripts (i.e., open source).

The model of a bridge that I described elsewhere (two interfaces and glue code between) is fairly common. Even when people short-cut it, as Whipp did by inheriting from common implementation objects, they usually acknowledge that model as the starting point. The wormhole paper on the PT website also formalizes bridges at a high level.

Most architectures include a library of handy functions. Typically these will implement a variety of things from a smart pointer class for relationships to things like a String class to be used in transforms and sort functions. [One can view something like MFC as such an architectural supporting library.] When incrementally improving a simplistic architecture, this is usually the place one starts -- providing shorthand tools to save coding.
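As an illustration (hypothetical names, not lifted from any commercial library), the relationship shorthand can be as small as a little template that makes navigating a 1:M association one call in an action body:

    #include <algorithm>
    #include <vector>

    // A library helper for one direction of a 1:M association.
    template <typename Many>
    class OneToMany {
    public:
        void relate(Many* instance)   { related_.push_back(instance); }
        void unrelate(Many* instance) {
            related_.erase(std::remove(related_.begin(), related_.end(), instance),
                           related_.end());
        }
        // Navigate the 1 -> M direction in a single call.
        const std::vector<Many*>& navigate() const { return related_; }
    private:
        std::vector<Many*> related_;
    };

    class Dog;
    class Owner {
    public:
        OneToMany<Dog> r1;     // "Owner has Dogs" across relationship R1
    };
    class Dog {
    public:
        Owner* r1 = nullptr;   // the M -> 1 direction is just a pointer
    };

    int main() {
        Owner alice;
        Dog rex, fido;
        alice.r1.relate(&rex);  rex.r1  = &alice;
        alice.r1.relate(&fido); fido.r1 = &alice;
        // One line replaces the brute-force navigation code in an action:
        return alice.r1.navigate().size() == 2 ? 0 : 1;
    }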
Often you can combine several commonly used code lines into a macro or function call with some minor adjustments to the templates. [Hint: when you look at a typical process diagram, most of the bubbles are about obtaining instance references. If you do that brute force, it is a lot of code lines that are awfully similar.]

At the megathinker level, design patterns certainly play a role. Most people think of an Architecture as suites of alternative implementation mechanisms that can be mapped into a particular OOA construct. One selects the best mechanism through colorization (loosely defined here as providing additional implementation level requirements for the application). In that view those mechanisms are essentially design patterns. For example, if domains are distributed there will be some design model of the communication mechanism (e.g., involving a mailbox or something) represented by a bridge event that qualifies as a design pattern.

In the end, though, the least painful way to deal with translation is to get some professional help. Either take one of the RD courses offered by various vendors or hire a consultant to do some mentoring. This is a big topic (the notes from one class fill a 3-inch binder on my bookshelf) and it is evenly divided between Megathinker Concepts and Low Level Tricks. This forum is nice for getting some impressions about RD, but I would not recommend tackling anything other than the Poor Man's Architecture without more sustained and consistent guidance.

I would also strongly advise doing a small pilot project using the Poor Man's Architecture before deciding where you want to go from there. This experience is invaluable for asking the right questions of the commercial architecture vendor. It also gives you a basis for evaluating the cost and benefits -- when you have done the brute force generation it will give a lot of insight into how a more sophisticated architecture could help you.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > I would argue that they are different facts. If we interpret
> > the '(W)' to mean that the attribute is being updated from
> > elsewhere (What is happening), then I think that is different
> > than knowing How the attribute is updated, which is what the
> > SDFD defines.
>
> But this is not the meaning I attach to (W) :-)
>
> I use (W) to mean "updated from outside the domain". I consider an
> SDFD to be part of a domain. I would rather use a different notation
> (or annotation) to indicate that an SDFD may update it.

You originally said that you use the '(W)' as a shorthand when the SDFD is trivial (i.e., a single action as one might have for a synchronous bridge update). It seems to me that in either case the source of the update is outside the domain. The difference lies in the complexity of the update description within the domain. More precisely, the difference lies in whether the SDFD is explicit or implicit.

I am simply suggesting that the '(W)' always mean that the attribute is "updated from outside the domain". If an SDFD is present, then the update description is explicit. If an SDFD is not present, then the translation assumes a trivial, implicit update description.
This still seems like two different facts.

> > Note that the OCM duplicates information about where events are
> > targeted.
>
> An OCM is a derived view. None of its information is unique to that
> diagram. In contrast, the OIM is a primary view.

Depends on where you are standing. Our original CASE tool did not share this view and we had to create the OCM ourselves. But in principle you are correct.

> > > This does not mean that I do not see merit in noting the
> > > specialness of an attribute that is modified from an SDFD.
> > > The thing that is special is that there is no path of
> > > sequential constraint that allows the domain to make use
> > > of the knowledge embodied in the SDFD. In other words,
> > > there is no way to synchronize the access.
> >
> > [...] I thought the SDFD was providing the
> > definition of the circumstances of the attribute update.
>
> No, the SDFD defines the behaviour following an external message.

Yes, but in the case of the (M) attribute the SDFD also provided guarding to ensure consistency of access within the domain, such as blocking read accessors until the SDFD processes were complete. I have assumed that same sort of thing happens here (i.e., the architecture will regard the SDFD as an 'action' and block other actions from executing that would access the attribute until the SDFD action is done). In this sense the 'circumstance' is the fact that an SDFD execution was triggered and the set of accesses made by the SDFD itself.

> > More
> > importantly, it was providing a description that the translation could
> > use to ensure integrity (i.e., synchronization in the sense that foot
> > shooting could be precluded).
>
> AFAIK, it does not add additional synchronization. It might seem
> nice if it did, but I think that would be a mistake. The concept
> of synchronization is useful outside of an SDFD, so the concept
> should not be coupled to the SDFD.

The synchronization I refer to is quite limited; there is no sense of cooperation and the relevant actions are unaware that there was any synchronization. It is also outside of the SDFD -- it is in the architecture. One can imagine a situation with a complicated SDFD for the update, but the domain doesn't care whether another action accesses the attribute before or after the SDFD update. In that case one might colorize the SDFD with "Don't Bother" to avoid unnecessary blocking, and the SDFD doesn't know the difference.

> > Also, I thought an action was
> > always atomic (i.e., its state machine cannot process other events)
> > regardless of whether state machines were running concurrently or not.
>
> In the simultaneous interpretation, different instances/objects can
> execute their own state machines simultaneously. In the interleaved
> interpretation, only one action within a domain executes at a time.

OK, but that still doesn't explain what you mean by 'atomic'.

> If multiple actions execute simultaneously, then it is possible for
> 2 actions to access an attribute at the same time, with undefined
> behaviour.
>
> I assume that simultaneous time also allows an SDFD to execute
> simultaneously with a state action. There is no way for a domain
> model to constrain when an SDFD is triggered. So an SDFD can
> always corrupt an attribute with a badly timed access.

And this leads me to believe that you were not assuming the same architectural implications that I was (above).
What do you have against the notion that the presence of an SDFD (explicit or implicit) can be used by the architecture to prevent the simultaneous execution of certain actions?

> The problem is that we need to guarantee the attributes can't
> be corrupted in simultaneous time. There are 2 types of corruption.
> One is that, in the middle of a set accessor, we have no guarantee
> that the value of the attribute is either the before- or after-
> value. A get-accessor could return *any* value, even one outside
> the attribute's attribute-domain.
>
> If we successfully avoid that hazard, then the second problem
> occurs when we attempt to modify a value. Say we read an attribute,
> determine a new value, then write it. Another action might write
> its own value between the initial read and final write. If this
> happens in a bank, then your bank account may lose a credit.
>
> A simple way to avoid this hazard is to provide atomic read-modify
> processes, the simplest being "test-and-set". Another simple way
> is to provide a "synchronised" notation for an entire ADFD/SDFD.
> This would require the behavior of that xDFD to follow the
> interleaved time interpretation, while allowing other xDFDs to be
> simultaneous.

The second 'simple way' is almost what I have been assuming all along. One thing I missed was that you were looking for a mechanism at the process level rather than the action level.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: RE: (SMU) Need help conceptualizing

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> You originally said that you use the '(W)' as a shorthand
> when the SDFD is trivial [...]
> I am simply suggesting that the '(W)' always mean that the
> attribute is "updated from outside the domain". If an SDFD
> is present, then the update description is explicit. If an
> SDFD is not present, then the translation assumes a trivial,
> implicit update description. This still seems like two
> different facts.

There are, as you say, two facts. "One fact in one place" implies 2 things: each fact is only in one place; and each place only holds one fact.

It is possible for more than one SDFD to update a single attribute. Therefore the presence of an SDFD that updates an attribute marked (W) cannot imply that no implicit SDFD exists for that attribute.

Treat (W) as meaning: It is possible to update this attribute via a path that is not explicitly defined in the domain. This covers the "implicit SDFD" interpretation and the "direct access from a bridge" interpretation.

I don't see any reason to mark an attribute in a special way to say that it is updated from an SDFD. A (M) attribute is special because it is completely controlled by its derivation (if you use an automatic update mechanism). An SDFD update is no different than an ADFD update.

> Yes, but in the case of the (M) attribute the SDFD also
> provided guarding to ensure consistency of access within
> the domain, such as blocking read accessors until the
> SDFD processes were complete.

My DFD mechanism used a DFD, not an SDFD. On reflection, it is probably wrong to place too many requirements on a derived attribute: the guarantees of its correctness should be no greater than those of a source attribute
(i.e., it is only guaranteed in the case of a sequentially constrained path).

> I have assumed that same sort of thing happens here (i.e.,
> the architecture will regard the SDFD as an 'action' and
> block other actions from executing that would access the
> attribute until the SDFD action is done).

In interleaved time, that is a perfectly reasonable approach. In simultaneous time, actions don't block other actions (unless they are different states of the same instance). The question is, should SDFDs always use an interleaved interpretation?

It seems trivially obvious to me that the answer must be "No"! Simultaneous time is usually employed in distributed systems. To have to synchronise every node in a system simply to invoke an SDFD would have severe performance implications.

There is also the danger of deadlocks to consider. In interleaved time, the entire domain is locked during a synchronous wormhole. So no other domain can access that domain until the wormhole is complete. But if the wormhole doesn't complete until the server has some information from the calling domain (this might be for an unrelated thread!) then the system will deadlock.

> OK, but that still doesn't explain what you mean by 'atomic'.

An atomic process is one where (a) it cannot be interrupted; and (b) any data it uses cannot be modified by another process until the first has completed.

> And this leads me to believe that you were not assuming the same
> architectural implications that I was (above).

I always use simultaneous time.

> What do you have against the notion that the presence of an SDFD
> (explicit or implicit) can be used by the architecture to prevent
> the simultaneous execution of certain actions?

Nothing much. But define "certain actions". How does the architecture know to lock some actions, but not others? Whatever criteria you give (unless you have a really fantastic scheme that I haven't considered), you'll find that the criteria are also applicable to other actions used to synchronize threads.

Consider the infamous thread-counter synchronization: one action fires off a set of events -- when these subthreads complete, they send an event back to the originating instance which counts them in. This could be avoided if the subthreads could use the "block certain actions" principle of your SDFD. They could simply say: "mark myself as complete. If all subthreads complete then generate event to originating instance". Such a mechanism would obviously be useful, so I'd prefer to generalize it out of the SDFD and into a notation that defines the "certain actions" for everyone. An advantage to this from the perspective of the architect is that an SDFD looks pretty much like an ADFD -- there's no special synchronization to worry about.

Dave.

-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: RE: (SMU) Need help conceptualizing

David.Whipp@infineon.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Chris Curtis wrote:
> I have a decent grasp of the straightforward architecture (1 object =
> 1 class, FSM, events) but am a bit lost when trying to picture other
> possible architecture implementations, say, a DBMS-centric
> architecture. Hoping for some good discussion...

I'll start by discussing what architectures _aren't_. It's important to distinguish between the architecture and the code generator.
Many people get away without using an architecture. They simply use a code generator based on the OOA-of-OOA. This is possible because of the distinction between code generators and architectures.

Most architectures are structured around the concept of mapping an object to a class. If the code generation is aimed at a non-OO language, then the architecture will be code-generated to use a naming convention instead of a class syntax. E.g., an architecture that leads to this C++ code:

    namespace application {
        class Dog {
        public:
            string get_owner() const;
        };
    }

may be code generated to C as:

    const char *appDogGetOwner(const struct appDog*);

The architecture for both may be identical -- only the syntax of the code need differ.

However, if the object<-->class structure is the basis of most architectures, then most of the differences are elsewhere. There are many ways of implementing an FSM. You can use the "state" pattern for an OO implementation, or a big switch statement. Architecturally, there's not much difference. Again, the differences may be seen as syntactic, not architectural. There are various other aspects of translation that can be treated as syntactic variants for code generators.

Architectures need to deal with the more important questions. For example, many people assume that, in a single-threaded implementation, there is an event queue for a domain. But it is also possible to have several event queues -- even one per instance. Multiple queues don't make much difference to overall performance but may, in some situations, achieve better response time.

The implementation of ADFDs and SDFDs can be interesting. You could treat every dataflow as a high priority event, and use a queue to emulate parallelism in the ADFD. Alternatively, you could flatten it to a sequential implementation. (If your source model uses an ASL, then you may find yourself expanding a sequential model to a parallel implementation.)

The implementation of attributes, processes, etc. becomes very interesting in a multi-domain model. It becomes quite common to implement the accessors with delegation. The classes don't actually store the data, they simply fetch it from somewhere that does -- for example, a DBMS. Then you've got to worry about caching the data, and packaging the updates to avoid performance problems. I don't have any experience with this.

My thoughts often wander towards the more esoteric. For example, consider an object called "Machine Code Operation". Its lifecycle might go through fetch, decode, execute, and writeback states. We might decide to implement it (in hardware) as a 4-stage pipeline. So now we must structure the implementation around the states, and ensure that only 4 active instructions exist at any time. This type of architecture is very radical, and exposes several areas in SM-OOA that are biased towards a software implementation.

Dave.

-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error.

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> There are, as you say, two facts. "One fact in one place" implies
> 2 things: each fact is only in one place; and each place only
> holds one fact.
>
> It is possible for more than one SDFD to update a single attribute.
> Therefore the presence of an SDFD that updates an attribute marked (W)
> cannot imply that no implicit SDFD exists for that attribute.
>
> Treat (W) as meaning: It is possible to update this attribute
> via a path that is not explicitly defined in the domain. This
> covers the "implicit SDFD" interpretation and the "direct
> access from a bridge" interpretation.

A quibble, but you have already allowed that the SDFDs would be in the OOA, so if an SDFD is defined, then it is in the domain. That's why I like the '(W)' definition of 'updated outside the domain', which also covers both the implicit and explicit situations. In any case, I still see '(W)' as defining What is happening and the implicit or explicit SDFD defining How it is happening. Different facts; different places.

> I don't see any reason to mark an attribute in a special way
> to say that it is updated from an SDFD. A (M) attribute is
> special because it is completely controlled by its derivation
> (if you use an automatic update mechanism). An SDFD update
> is no different than an ADFD update.

And I argue that a '(W)' attribute is special because its update is triggered by activities outside the domain. That makes it special because the analyst may have to take that fact into consideration in designing the rest of the domain. From another viewpoint, it also seems at least analogous to the (R) qualifier.

> In interleaved time, that is a perfectly reasonable approach.
> In simultaneous time, actions don't block other actions
> (unless they are different states of the same instance). The
> question is, should SDFDs always use an interleaved
> interpretation?

Actions do routinely block other actions in the simultaneous view. Any write accessor must block all read accessors, and it is often convenient to do such blocking at the action level rather than the process level. It is also not unusual to have specific threads block all other actions (e.g., to process an error).

> It seems trivially obvious to me that the answer must be "No"!
> Simultaneous time is usually employed in distributed systems.
> To have to synchronise every node in a system simply to invoke
> an SDFD would have severe performance implications.

True enough in extreme cases. But I have never heard of anyone who has developed domains with distributed objects -- probably for exactly this reason. B-) If a domain is atomic relative to distributed boundaries, then blocking actions within the domain does not present any special problems for a distributed system. [It still needs to be done properly or there will still be a performance problem. A deadlock is a deadlock whether it is distributed or not.]

> There is also the danger of deadlocks to consider. In interleaved
> time, the entire domain is locked during a synchronous wormhole.
> So no other domain can access that domain until the wormhole is
> complete. But if the wormhole doesn't complete until the server
> has some information from the calling domain (this might be for
> an unrelated thread!) then the system will deadlock.

Hey, nobody said it was easy.

> > OK, but that still doesn't explain what you mean by 'atomic'.
>
> An atomic process is one where (a) it cannot be interrupted; and
> (b) any data it uses cannot be modified by another process until
> the first has completed.

Then an atomic action would have the same properties. If an SDFD is regarded as an action, then it seems to me that (b) is exactly what the architecture must enforce in the simultaneous view.
The only extension required is blocking of read access to the '(W)' attribute.

> > And this leads me to believe that you were not assuming the same
> > architectural implications that I was (above).
>
> I always use simultaneous time.

Some people always do things the hard way. B-) Actually, the implications I was talking about were the architectural blocking of access to the attributes.

> > What do you have against the notion that the presence of an SDFD
> > (explicit or implicit) can be used by the architecture to prevent
> > the simultaneous execution of certain actions?
>
> Nothing much. But define "certain actions". How does the architecture
> know to lock some actions, but not others?

The key issue is to prevent access to the relevant attributes that the SDFD uses. I have assumed the easiest way to do this is by blocking actions (e.g., via threads, etc.). One could do it at the accessor level, but there is probably an overhead tradeoff. In any case, the SDFD defines those 'certain actions' as any that access the SDFD's data.

> Whatever criteria you give (unless you have a really fantastic
> scheme that I haven't considered), you'll find that the criteria
> are also applicable to other actions used to synchronize threads.

Sure. I would bet that in a lot of cases this would end up being punted back to the analyst by asking the analyst to colorize the actions that needed to be blocked for a given SDFD.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Need help conceptualizing

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Our recent discussion has prompted me to re-read the "Bridges and Wormholes" paper, because several things are still unclear to me. Please explain this sentence to me:

"Depending on the properties of the particular PIO domain employed in the system this functionality (obtaining the current value of a sensor-based attribute) may be provided within the lifecycle of some PIO object...or the functionality may be provided in the form of a synchronous service."

Does this mean these two options are interchangeable? How are they different? Conceptually, do synchronous services just return domain "attributes"?

And I'm still fuzzy on return coordinates. What exactly are they? How are they typically implemented?

Kind Regards, Allen Theobald Nova Engineering, Inc.

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> "Depending on the properties of the particular PIO domain employed
> in the system this functionality (obtaining the current value of a
> sensor-based attribute) may be provided within the lifecycle of
> some PIO object...or the functionality may be provided in the form
> of a synchronous service."
>
> Does this mean these two options are interchangeable? How are they
> different? Conceptually, do synchronous services just return domain
> "attributes"?

They are interchangeable in the sense that either can be used. However, they are not interchangeable in a particular application. The PIO domain will determine which is appropriate. What this is saying is that some client wants the value of an attribute in a PIO object.
There are two possibilities:

(1) The value is located in an attribute of a PIO object instance. In this case a simple, synchronous attribute accessor can be used. This is synchronous because the value is immediately available from a data store. This is true even if some simple operation on it, such as a units conversion, is required. The client and the bridge can expect the value back post haste. So this can be done with a synchronous service. [The bridge might invoke a synchronous service from the PIO domain's external interface that, in turn, invokes the instance accessor. Or the bridge could simply invoke the accessor directly, since an accessor is, by definition, just a special kind of synchronous service.]

(2) The value must be computed or updated in a manner that requires a state transition (i.e., a state action must be invoked). This is essentially asynchronous behavior, so the bridge must place the appropriate event on the queue to cause the transition (this is what 'provided within the lifecycle' means). It is asynchronous because neither the bridge nor the client can predict when that event will actually be processed. [The basic problem here is that when multiple requests are made, the order in which the responses are returned is not necessarily the same as the order in which the requests were issued because the delay between request and response is arbitrary.]

> And I'm still fuzzy on return coordinates. What exactly are they?
> How are they typically implemented?

Isn't everybody? The 'return coordinates' is an abstraction. The abstraction simply represents some mechanism that allows an asynchronous response to be matched up with the original request. It is usually unnecessary in the synchronous case because there is typically a procession of nested calls that simply return to the right place. Where the mechanism must be implemented is in the asynchronous situation where the client domain issues a request event and, after an arbitrary delay, the service domain issues a response event. When the client receives the response event it must match up that particular event with the original request internally. Whatever is done to support this is abstracted in the notion of the 'return coordinates'.

This is not very different than a referential attribute. A referential attribute is simply a placeholder for some mechanism to keep track of which instances are associated. In the implementation that could be linked lists, smart pointers, etc. The only restriction on the mechanism is that it preserve the rules of association that one would have if the attribute really were an ASCII instance name. The notion of a 'coordinate' to which the response event returns in the client domain is simply an easy means of abstractly dealing with keeping track of a gaggle of requests and responses during a penny pushing simulation.

One way to handle this is for the 'return coordinate' to be an index in a dynamic table maintained by the client's external interface. When the outgoing wormhole event is processed, the bridge creates an entry in the table that identifies what event should be placed on the queue when the response comes back. That index is passed to the service domain as part of the bridge. The service domain stores it in a similar way (though it has no idea of the index semantic -- it is simply a handle).
When the service domain completes its processing and issues the response event to a wormhole, the service domain's external interface recovers the handle and forwards it back to the client with the response event. The client's external interface receives the event, does a table lookup on the index, adds a data packet as necessary from the service's response, pops that event on the queue, and deletes the table entry.

Using my model of bridges (two interfaces and glue code) makes explaining what is going on easier. However, I think you can see that there is a lot of latitude at implementation time. For example, the client could simply pass the address of a callback routine in the service. The service domain's response wormhole invokes the callback and the callback places the right event on the client's queue. [I am not advocating this sort of shortcut for reasons I have expressed elsewhere, but it illustrates the freedom one has.]

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: (SMU) Modelling User Interaction

Neil Earnshaw writes to shlaer-mellor-users:
--------------------------------------------------------------------

Consider an editing application where an operator interactively creates, updates and deletes objects in an application domain. These objects have some constraints on their attributes, e.g. no duplicate identifier values are allowed and certain attributes must have non-null values assigned. Should the checking of these constraints and the feedback to the operator be modelled explicitly?

My experience is that a huge number of very similar messages to an Editor terminator are produced, e.g. ED1:Duplicate widget name(name), ED2:Widget specified without name(), ED2:Widget created(id). This really clutters up your OCM, but it does bring out a lot of application behaviour that is otherwise hidden in identifier and data type definitions.

Does anyone have any comments on the explicit vs the implicit approaches?

Neil Earnshaw Object Software Engineers Ltd.

'archive.9911' --

Subject: Re: (SMU) Modelling User Interaction

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Earnshaw...

> Consider an editing application where an operator interactively creates,
> updates and deletes objects in an application domain. These objects have
> some constraints on their attributes, e.g. no duplicate identifier
> values are allowed and certain attributes must have non-null values
> assigned. Should the checking of these constraints and the feedback to
> the operator be modelled explicitly?

I would go with a definite Sometimes. I would say that any constraint that is unique to the problem being solved and which is important to the subject matter should be explicitly modeled. Alas, this sort of definition is vague enough to provide little useful information. The problem is that this is basically a judgment call and that often depends upon experience, so you have to look at such issues on a case-by-case basis. Perhaps more importantly, there is an issue about *where* it would be modeled when it is explicitly handled.

> My experience is that a huge number of very similar messages to an
> Editor terminator are produced, e.g. ED1:Duplicate widget name(name),
> ED2:Widget specified without name(), ED2:Widget created(id).
> This really clutters up your OCM, but it does bring out a lot of
> application behaviour that is otherwise hidden in identifier and
> data type definitions.

These errors strike me as cockpit errors. If the domain is properly modeled the user should not be making such requests. The user is probably either careless or unfamiliar with the application. Therefore I would not be inclined to handle them explicitly in an application domain. [There might be situations where the user cannot be expected to deal with interactions with other users/systems or even know the state of the system, but I regard those as fairly exceptional in this context. More below.]

Most likely, the closest I would let the ED1 error get to the application solution logic would be to do the checking in a bridge. (More specifically, in the domain interface for the receiving domain.) The bridge would check that the target of the create event does not already exist and return the ED1 error if it does. [This can get architecturally tricky in the simultaneous view of time since another request might be in the process of creating the duplicate instance, but it might be easier to handle here than elsewhere.]

The ED2 type of error strikes me as purely syntactic -- the user has probably failed to fill in some field or combination of fields in the GUI that are required. This should be dealt with long before it ever gets to the application proper.

At another level, I was uncomfortable with your description of the problem. It implied a very direct relationship between the user and the model (e.g., "...an operator interactively creates, updates and deletes objects in an application domain."). The comings and goings of objects are a reflection of the Analyst's representation of what the user wants done. While we deliberately identify objects to be closely related to real world entities, they are still just a representation of the Analyst's solution, so the user should have no idea that they exist per se. (I suspect you are not confused over this, but the phrasing makes me nervous.)

For this reason I think it is important to think about a GUI (or more general UI) domain as an intermediary between the user and the application. As such it has a very different mission than the application domains. It serves to translate communications based upon display artifacts into communications based upon the artifacts of the Analyst's problem solution. This implies a partitioning of responsibilities between the GUI and the rest of the application. I would expect the mission of the GUI domain to be concerned with exactly the sorts of data entry checking that your examples describe -- more generally, with handling errors related specifically to user interaction. OTOH, I would not expect such a domain to be processing events related to the solution logic of the application. Thus I would expect that if such errors are processed explicitly, they would be processed in a GUI domain. If one has a GUI domain, then the solution logic application domains' OCMs would not be cluttered with messages related to user interaction and the GUI domain's OCM would not be cluttered with application logic events.

> Does anyone have any comments on the explicit vs the implicit
> approaches?

We handle a lot of errors that syntactically violate specifications (e.g., requesting a voltage level that the hardware does not support) in realized code in an architectural domain. This error checking is invoked from the input bridges to the various domains.
This Validation Domain has an IM to define data structures, but there are no active objects. Everything is handled in the bridge request, which is actually just an API to the realized code. It is especially handy for us because it helps us to run the same application logic domains for different hardware.

FWIW, until now I really haven't thought much about the explicit/implicit tradeoff in processing errors, which accounts for my mealy-mouthed initial definition. So take this list with a tablespoon of salt... [Below I am thinking of errors that don't seem to be obvious candidates for a GUI domain.]

Syntactic errors should probably be implicitly checked in bridges or realized code. These are errors that the user should know better than to make or that would be considered traditional data entry checks. For example, a range check on a data value might be performed by a low level, third party Window Manager package.

Errors that can arise through no fault of the end user probably should be modeled explicitly in the application domains. These might depend upon interactions with external systems or with the current state of the application that a particular user cannot reasonably be expected to know about. Odds are they reflect something important about the application's functionality.

Errors that violate general principles like the relational data model (ED1) should probably be implicitly checked. My thought here is that one should not distract from modeling the uniqueness of the application for the sake of being an Apostle of the Obvious for the general paradigm.

If the error affects flow of control, it probably should be directly modeled. By affecting the flow of control I mean that the domain's normal processing will be different after the error. For example, an ATM machine without a Breathalyzer might eject the ATM card after three failed attempts to enter the correct PIN. In that case, counting the failures should be in the domain, not the bridge, because it is fundamental to the operation of the machine.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Need help conceptualizing

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mr. Levkoff wrote:
> Please see the November issue of Embedded Systems Programming for an example
> of a naive architecture. All patterns included.

Hello! Forget bridging and task-based event dispatching for a moment, because I would like to get the FSM, the ActiveInstance (which are both architectural?), and the Sample Object fully implemented. I love examples! The article does an excellent job of explaining the translation, BTW, but there are a few details missing.

I've attached listing1, listing2, and listing4. If anyone has read the article and would like to help finish it up as a part of an intellectual exercise, well, I'm all for that!

My interest stems from (among other things) how closely the code parallels chapter 9 in "Object Lifecycles: Modeling the World in States".

Kind Regards, Allen Theobald Nova Engineering, Inc.
[Attachments: Listing1.cpp, Listing2.cpp, and Listing4.cpp were sent as base64-encoded MIME parts and are not reproduced here. Listing1 declares the architectural Transition, State, and ActiveInstance classes (FSM.H/FSM.CPP); Listing2 gives the translated Sample object with its states, transition tables, and ClassAction dispatch; Listing4 declares the message and dataMessage event classes.]

Subject: Re: (SMU) Need help conceptualizing

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Levkoff...

> Please see the November issue of Embedded Systems Programming for an example
> of a naive architecture. All patterns included.

I finally got my copy -- it took awhile to get through the forwarding since they moved our division. It would also help if they had thought to provide a mail room clerk at the new location. But I digress...

It was a very good article. I particularly liked the way you made the point that application independence is the dual of implementation independence. I just wish it had been in a place with wider exposure than ESP. It might have brought an epiphany to some of the elaborationists to realize that instead of applying their low level design patterns over and over to every application they could do it just once if they stopped to think a little more about what they were actually modeling.

-- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: RE: (SMU) Need help conceptualizing

"Levkoff, Bruce" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Alas, Time Magazine rejected it. Glad you enjoyed it.
Bruce Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > Forget bridging and task-based event dispatching for a moment, > because I would like to get the FSM, the ActiveInstance (which > are both architectural?), and the Sample Object fully > implemented. Yes, they are architectural. The interesting thing to note is the subject matter. They are modeling the artifacts of the implementation. This is much clearer in Levkoff's model than in OL:MWS because there is more detail (e.g., Transition objects). > I love examples! The article does an excellent job > of explaining the translation, BTW, but there are a few details > missing. > > I've attached listing1, listing2, and listing4. If anyone has > read the article and would like to help finish it up as a > part of an intellectual exercise, well I'm all for that! I'll pass on attacking the details; it would tarnish my megathinker facade. Also, I have been a Process Guy now for a couple of years and they'd take away my new union card if I actually delved into code. Besides, I am starting to mix up the syntax from various languages and that confuses people. > My interest stems from (among other things) how closely the code > parallels chapter 9 in "Object Lifecycles: Modeling the World in > States". That was probably pretty intentional. B-) Both are striving for a generic approach to model the invariants. [A nice phrase that Steve Mellor used on OTUG awhile back.] They are applying the same sort of abstract analysis principles to the Architecture that would be applied to the application. Modeling invariants is particularly relevant in the Architecture because many people (i.e., all elaborationists) do not see the implementation in terms of high level, reusable abstractions. They don't make the leap from the mire of protected variables, overloaded operators, or semaphore patterns to the place where all that can be collected in a Big Honking Design Pattern. Awhile back I described a Poor Man's Architecture as one extreme among architectures. By the standard of Levkoff's model it is very primitive. But recall that I said it could be grown by doing things like inheriting from a base class. Voila, ActiveInstance! But that is kind of the wrong way to develop a reusable architecture because it involves a lot of painful trial and error. Like Basic, it is OK for a learning experience but you don't want to do a lot of real stuff with it that you will have to live with for awhile.
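Purely by way of illustration: a minimal, hypothetical sketch of the kind of base class being described here. The names and members below are invented for this sketch; they are not taken from Levkoff's article, OL:MWS, or the attached listings.

#include <cstdint>

struct message;                          // event data carrier; see the decoded Listing4 above for one version

// Hypothetical sketch only: the sort of base class a Poor Man's Architecture
// might grow into so that every active object shares the same event machinery.
class ActiveInstanceSketch
{
public:
    virtual ~ActiveInstanceSketch() {}

    // Architecture entry point: deliver one event to this instance's FSM.
    // Returns false if the event cannot be accepted in the current state.
    virtual bool DoEvent(message* eventData) = 0;

    uint16_t GetState() const { return currentState; }

protected:
    void SetState(uint16_t newState) { currentState = newState; }

private:
    uint16_t currentState = 0;           // index into the subclass's state list
};

A subclass such as Sample would then supply its own state list and per-state actions, much as the attached listings do.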
As I also suggested at the time, if you want a general architecture to service many applications and support fully automated code generation (the other extreme of the architectural spectrum), you need to build an OOA-of-Architecture or similar meta model beforehand. When you do that, good OOA practice will likely lead you down a similar path as Levkoff and OL:MWS followed. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: RE: (SMU) Need help conceptualizing Daniel Dearing writes to shlaer-mellor-users: -------------------------------------------------------------------- Bruce, You've got me all frustrated now, since I live in England, it seems I am only permitted access to the ESP Europe magazine. I waited patiently for the November edition and was bitterly disappointed to find that your architecture was not in it! I looked on the ESP web site, and although the article is mentioned, it seems there is no way to download it. Could I perhaps trouble you for a copy? (I would be most grateful) Best Regards, Daniel Dearing Daniel Dearing Senior Project Consultant ============================== Plextek Limited London Road Great Chesterford Essex CB10 1NY ============================== Tel +44 (0) 1799 533312 Subject: Re: (SMU) Need help conceptualizing lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Levkoff... > Alas, Time Magazine rejected it. How proletarian of them. Perhaps the New Yorker? -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Need help conceptualizing Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Daniel Dearing wrote: > Could I perhaps trouble you for a copy? (I would be most grateful) We have extra copies of ESP at our company. I could either fax you a copy of the article, or mail you the whole magazine! Take your pick. :^) I don't believe its available electronically, is it? Bruce? I do know its a must read. Kind Regards, Allen Subject: Re: (SMU) Need help conceptualizing Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- > I'll pass on attacking the details; it would tarnish my > megathinker facade. :^) That's o.k. I think I have a handle on it now. This whole thread, David Whipp's code, the point-counter-point discussion, and finally Bruce's article have brought many things about S-M into focus for me. I feel I have made a quantum leap forward in my understanding. Many thanks to everyone who contributed to this thread. Kind Regards, Allen Theobald Nova Engineering, Inc. Cincinnati, OH Subject: (SMU) ESP Article "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Try 4 (after un- and re- subscribing): >"Levkoff, Bruce" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Please see the November issue of Embedded Systems Programming for an example >of a naive architecture. All patterns included. Speaking of which, this is the first time I've seen HOLD as an option in dealing with events. 
I only remember CAN'T HAPPEN and IGNORE from the PT class and books. Is the concept of kicking an event back saying "don't bother me now, I'm busy" in common use, or is it something that you invented? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@transcrypt.com www.transcrypt.com Subject: RE: (SMU) ESP Article "Levkoff, Bruce" writes to shlaer-mellor-users: -------------------------------------------------------------------- I believe I first saw the HOLD event capability in docs from Kennedy-Carter. Bruce -----Original Message----- From: Dana Simonson [mailto:DSimonson@efjohnson.com] Sent: Friday, November 05, 1999 11:45 AM To: shlaer-mellor-users@projtech.com Subject: (SMU) ESP Article "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Try 4 (after un- and re- subscribing): >"Levkoff, Bruce" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Please see the November issue of Embedded Systems Programming for an example >of a naive architecture. All patterns included. Speaking of which, this is the first time I've seen HOLD as an option in dealing with events. I only remember CAN'T HAPPEN and IGNORE from the PT class and books. Is the concept of kicking an event back saying "don't bother me now, I'm busy" in common use, or is it something that you invented? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@transcrypt.com www.transcrypt.com Subject: Re: (SMU) ESP Article Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- >Speaking of which, this is the first time I've seen HOLD as an option in >dealing with events. I only remember CAN'T HAPPEN and IGNORE from the PT >class and books. Is the concept of kicking an event back saying "don't >bother me now, I'm busy" in common use, or is it something that you invented? In UML the analyst may mark a set of events to be deferred within a state. A deferred event is saved until the active object enters a state where the event is not deferred. See page 249 of the UML Reference Manual and pages 292 and 297 of the UML User Guide for more information. Carolyn Subject: Re: (SMU) ESP Article lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > Speaking of which, this is the first time I've seen HOLD as an option in dealing with events. I only remember CAN'T HAPPEN and IGNORE from the PT class and books. Is the concept of kicking an event back saying "don't bother me now, I'm busy" in common use, or is it something that you invented? We don't use them so this is pure speculation, but I can think of two possibilities. One is to deal with the same problems that OOA96 solved by prioritizing self-directed events (i.e., an alternative to re-ordering the queue). The other is to handle blocking, particularly in the simultaneous view of time. In both cases, though, the HOLD would be temporary rather than the intrinsic STT entry of, say, IGNORE. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: RE: (SMU) ESP Article "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- I pulled OOA97 & KC did indeed define it there. -Thanks >>> "Levkoff, Bruce" 11/05 11:02 AM >>> "Levkoff, Bruce" writes to shlaer-mellor-users: -------------------------------------------------------------------- I believe I first saw the HOLD event capability in docs from Kennedy-Carter. Bruce -----Original Message----- From: Dana Simonson [mailto:DSimonson@efjohnson.com] Sent: Friday, November 05, 1999 11:45 AM To: shlaer-mellor-users@projtech.com Subject: (SMU) ESP Article "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Try 4 (after un- and re- subscribing): >"Levkoff, Bruce" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Please see the November issue of Embedded Systems Programming for an example >of a naive architecture. All patterns included. Speaking of which, this is the first time I've seen HOLD as an option in dealing with events. I only remember CAN'T HAPPEN and IGNORE from the PT class and books. Is the concept of kicking an event back saying "don't bother me now, I'm busy" in common use, or is it something that you invented? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@transcrypt.com www.transcrypt.com Subject: RE: (SMU) ESP Article "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > -----Original Message----- > From: owner-shlaer-mellor-users@projtech.com > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Simonson... > In both cases, though, the HOLD would be temporary > rather than the intrinsic STT entry of, say, IGNORE. Actually it *is* an STT entry, exactly like IGNORE. Subject: Re: (SMU) ESP Article Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- I think HOLD was created for cases in which you don't want to ignore an event, but you are in a part of the state model which can't deal with it yet. Remember the ODMS problem from the training class? The Robot state model couldn't accept a "Robot Request" event while it was in the middle of a disk transfer. The request had to be stored down as an instance of an object, so the robot could look for any outstanding requests when it was finished. The HOLD effect could simplify situations like this somewhat. OOA97 cites a couple of other examples also. BTW, we don't use them here either (not yet anyway). We considered it, but there hasn't been any overwhelming need to date. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > We don't use them so this is pure speculation, but I can think of two possibilities. One is to deal with the same problems that OOA96 solved by prioritizing self-directed events (i.e., an alternative to re-ordering the queue). The other is to handle blocking, particularly in the simultaneous view > of time. 
In both cases, though, the HOLD would be temporary rather > than the intrinsic STT entry of, say, IGNORE. > Subject: Re: RE: (SMU) ESP Article "Neal Welland" writes to shlaer-mellor-users: -------------------------------------------------------------------- SMU, From my somewhat dim recollection, the HOLD mechanism was introduced due to some pressure from within the SM user community, particularly the telecomms fraternity. At the time (early 96), we were attempting to migrate to SM, but when analysing the functionality of our existing system, it seemed there was the absolute need for an ability to "prioritise" the set of possible events that could be processed next. We referred to this as "state machine collapse". Part of the problem seemed to be centred around the fact that SM uses Moore state models. Our application seemed better suited to a Mealy state model. I wasn't personally involved, but I can only assume that KC agreed with the analysis and introduced it to "their" version of SM. As this is an outsider's view of things, maybe somebody at KC could tell us if this is an accurate description of historical events? Regards, Neal. Subject: Re: (SMU) ESP Article "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- As users of the KC tool set we have had the ability to HOLD events for some time. While they are useful in allowing the STD to be simplified there are also traps awaiting. EG if the events that will cause a transition from a given state are not guaranteed to arrive, then holding other events in that state can result in your STD becoming locked, so use with care! Dave Harris
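A purely hypothetical sketch (invented names; this is not the KC tool's format), loosely based on the ODMS Robot example Hogan mentioned, shows the trap when the STT is written down as data:

// Hypothetical STT fragment for a Robot-like state model. ROBOT_REQUEST is
// held while TRANSFERRING; the only exit from TRANSFERRING is TRANSFER_COMPLETE.
enum RobotState { IDLE, TRANSFERRING };
enum RobotEvent { ROBOT_REQUEST, TRANSFER_COMPLETE };
enum Effect     { GO_TRANSFERRING, GO_IDLE, IGNORE_EVENT, HOLD_EVENT };

// effect[state][event]
static const Effect stt[2][2] = {
    /* IDLE         */ { GO_TRANSFERRING, IGNORE_EVENT },
    /* TRANSFERRING */ { HOLD_EVENT,      GO_IDLE      },
};

// If TRANSFER_COMPLETE is not guaranteed to arrive, the held ROBOT_REQUEST is
// never dequeued and the state machine is locked -- exactly the trap described above.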
Subject: Re: (SMU) ESP Article lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > > In both cases, though, the HOLD would be temporary > > rather than the intrinsic STT entry of, say, IGNORE. > > Actually it *is* an STT entry, exactly like IGNORE. Yes, but my point was that in so doing the nature of the STT has changed from static to dynamic because that cell of the STT is now multivalued. If one simply places HOLD in the cell that is, at best, misleading because it leaves out crucial information about the state where the transition goes when the HOLD is not in effect and about the nature of the conditions under which the HOLD applies. If one is going to use HOLD in an STT, then there has to be some other notational kludge to supply this information. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Welland... > From my somewhat dim recollection, the HOLD mechanism was introduced due to some > pressure from within the SM user community, particularly the telecomms > fraternity. At the time (early 96), we were attempting to migrate to SM, but > when analysing the functionality of our existing system, it seemed there was the > absolute need for an ability to "prioritise" the set of possible events that > could be processed next. We referred to this as "state machine collapse". Part > of the problem seemed to be centred around the fact that SM uses Moore state > models. Our application seemed better suited to a Mealy state model. I am curious if you have a specific example. I have never personally encountered a situation where it (Mealy vs. Moore) made a difference.
While the specific states and transitions might be different under each approach, I would think they are convertible so that there would not be an 'absolute need' to prioritize (other than self-directed events) -- at least because of the model used. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article ian@kc.com (Ian Wilkie) writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Neal Welland" wrote to shlaer-mellor-users: > -------------------------------------------------------------------- > > SMU, > > From my somewhat dim recollection, the HOLD mechanism was introduced due to some > pressure from within the SM user community, particularly the telecomms > fraternity. At the time (early 96), we were attempting to migrate to SM, but > when analysing the functionality of our existing system, it seemed there was the > absolute need for an ability to "prioritise" the set of possible events that > could be processed next. We referred to this as "state machine collapse". Part > of the problem seemed to be centred around the fact that SM uses Moore state > models. Our application seemed better suited to a Mealy state model. > > I wasn't personally involved, but I can only assume that KC agreed with the > analysis and introduced it to "their" version of SM. As this is an outsider's > view of things, maybe somebody at KC could tell us if this is an accurate > description of historical events? Yes, we introduced the HOLD event while working on the project to which you refer. IIRC, the issue prompting the introduction of HOLD was not so much Mealy vs Moore as the recognition that a very common situation in Telecomms is the arrival of a request to do something when you are not yet ready to deal with it. Simple event queuing is not enough because the state machine in charge of the request must continue to accept events from (typically) the switch fabric hardware in order to complete the previous request. As Bary Hogan observed, this is precisely what happens in the ODMS case study with the Robot. In that case, the problem is not too bad since there is only one such state model (unless you count the disk request assigner). However, we observed domains where many state models would have had to have identical patterns of behaviour. This would have meant the analyst specifying time and time again the same queueing semantics. The problem with that is, in our view, that this is redundant work, obscures the main behaviour of the domain and introduces more scope for error. (It is interesting to note that the first versions of the ODMS case study solution contained a bug in this area whereby the Robot would fail to notice a new request under some circumstances). Of course, the downside of this is that architectures have to be more sophisticated to support this idea. In addition, "synchronous" architectures have to become asynchronous at this point. However, as Steve Mellor has observed, this is not really a problem since a HOLD effect can always be translated by adding an additional object to act as the queue. Carolyn Duby wrote to shlaer-mellor-users: -------------------------------------------------------------------- > In UML the analyst may mark a set of events to be deferred within a state. 
> A deferred event is saved until the active object enters a state where the > event is not deferred. See page 249 of the UML Reference Manual and pages > 292 and 297 of the UML User Guide for more information. Yes, this is true. However, the mechanism is slightly different from the OOA97 HOLD effect. In OOA97, HOLD is considered to be an intrinsic part of the event processing cycle for the state machine. In UML, there is actually a "defer" action, which the user puts as a response to the event. A clear mapping from OOA97 HOLD to UML defer can be found - Each OOA97 hold effect becomes an "internal" transition (i.e. to the same state) with a single transition action which is "defer". However, UML implicitly allows more freedom (or rope if you prefer). For example, you could have an action sequence on a transition which does a few operations then "defers" the event. Not only is the model questionable, but the semantics of the defer are difficult, since the transition is made but the response is deferred. This is one of the issues exercising the group (including myself, Steve Mellor, Jim Rumbaugh and Bran Selic) that is going to make a submission in response to the OMG's request for Precise Action Semantics in UML. By the way Carolyn, the next collaboration meeting is in Boston next week. The meeting is open so it would be good to see some of the Pathfinder people there. I'll send you details if you are interested. > "Levkoff, Bruce" wrote to shlaer-mellor-users: > -------------------------------------------------------------------- > > I believe I first saw the HOLD event capability in docs from Kennedy-Carter. Bruce, unfortunately I am currently in a little known backwater of civilisation called the UK. We get "Embedded System Programming (Europe)" only and your article does not appear in it this month. Is there anywhere that we could get a copy of it? The ESP Web Site does not provide a link. Thanks, Ian Wilkie ===================================================== Kennedy Carter Ltd. Tel: (+44) 1483 483 200 Fax: (+44) 1483 483 201 http://www.kc.com e-mail: ian@kc.com ===================================================== Subject: Re: (SMU) ESP Article baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Fontana... > >> > In both cases, though, the HOLD would be temporary >> > rather than the intrinsic STT entry of, say, IGNORE. >> >> Actually it *is* an STT entry, exactly like IGNORE. > >Yes, but my point was that in so doing the nature of the STT has changed from >static to dynamic because that cell of the STT is now multivalued. If one >simply places HOLD in the cell that is, at best, misleading because it leaves >out crucial information about the state where the transition goes when the HOLD >is not in effect and about the nature of the conditions under which the HOLD >applies. If one is going to use HOLD in an STT, then there has to be some other >notational kludge to supply this information. By placing HOLD in a particular cell, you are saying that the event in question is always held in that state (i.e. the HOLD is always in effect). Only when a different event is received, and a different state is entered in which the event in question is not held, can you accept that event. 
Since there are no conditions except for the current state and event, I would argue that the cell is not multivalued. Bary Hogan LMTAS Subject: Re: (SMU) ESP Article pete.norton@gecm.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Fontana... > > > > In both cases, though, the HOLD would be temporary > > > rather than the intrinsic STT entry of, say, IGNORE. > > > > Actually it *is* an STT entry, exactly like IGNORE. > > Yes, but my point was that in so doing the nature of the STT has changed from > static to dynamic because that cell of the STT is now multivalued. If one > simply places HOLD in the cell that is, at best, misleading because it leaves > out crucial information about the state where the transition goes when the HOLD > is not in effect and about the nature of the conditions under which the HOLD > applies. If one is going to use HOLD in an STT, then there has to be some other > notational kludge to supply this information. > I guess, from reading some of the other messages in this thread, that the event is held until the object enters some other state where the event is not held (preferably not ignore or can't happen I'd hope ;-). I should think that KC have taken the easy option and made the HOLD take precedence for the whole time that the object is in such a state. -- Pete Norton Alenia Marconi Systems Portsmouth, UK. Subject: RE: (SMU) ESP Article "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- H.S said: > If one is going to use HOLD in an STT, then there has > to be some other > notational kludge to supply this information. Hogan and Norton have covered this well. HOLD is a valid cell value for an STT. Also - while KC introduced this in the Shlaer-Mellor world through their OOA-97 paper, you can also see HOLD defined (although with varying rigor) in most UML books. Subject: Re: (SMU) ESP Article "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- pete.norton@gecm.com wrote: > pete.norton@gecm.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > lahman writes to shlaer-mellor-users: > > -------------------------------------------------------------------- > > > > Responding to Fontana... > > > > > > In both cases, though, the HOLD would be temporary > > > > rather than the intrinsic STT entry of, say, IGNORE. > > > > > > Actually it *is* an STT entry, exactly like IGNORE. > > > > Yes, but my point was that in so doing the nature of the STT has changed from > > static to dynamic because that cell of the STT is now multivalued. If one > > simply places HOLD in the cell that is, at best, misleading because it leaves > > out crucial information about the state where the transition goes when the HOLD > > is not in effect and about the nature of the conditions under which the HOLD > > applies. If one is going to use HOLD in an STT, then there has to be some other > > notational kludge to supply this information. > > > > I guess, from reading some of the other messages in this thread, that the event is held until the object enters some other state where the event is not held (preferably not ignore or can't happen I'd hope ;-). 
I should think that KC have taken the easy option and made the HOLD take precedence for the whole time that the object is in such a state. > > -- > > Pete Norton > Alenia Marconi Systems > Portsmouth, UK. With the KC tool one specifies a state transition diagram (STD) and a state transition table (STT). Where a transition is defined on the STD this is filled in on the STT and may not be overridden. For every other event/state (except the delete state) combination the analyst defines the action to be taken, either IGNORE, CANNOT HAPPEN or HOLD. Whichever action is chosen always happens for the state/event combination and holds for the entire time the state is occupied. The HELD event remains on the event queue for processing when a subsequent state is entered. It is not unknown for an event that has been held to subsequently be ignored without being processed; however, transition to a state where it CANNOT HAPPEN would be an analysis error. Dave Harris Subject: (SMU) Anonymous event notification Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Greetings! I've noticed that in several examples objects send events directly to other objects (target is known ahead of time) rather than "posting" an event anonymously and allowing those wishing to know to be notified when an event occurs. I don't have my S-M books immediately available. What does S-M say about this? Kind Regards, Allen Nova Engineering, Inc. Cincinnati, OH, USA Subject: Re: (SMU) ESP Article lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... > By placing HOLD in a particular cell, you are saying that the event in question > is always held in that state (i.e. the HOLD is always in effect). Only when a > different event is received, and a different state is entered in which the event > in question is not held, can you accept that event. Since there are no > conditions except for the current state and event, I would argue that the cell > is not multivalued. The interpretation of HOLD is in the eye of the beholder. To me the HOLD simply means that if the event is encountered while the FSM is in a particular state, the event's processing must be deferred until some condition is satisfied. If the condition is already satisfied when the event is received, then the event could be processed immediately. This is exactly what happens to externally generated events when self-directed events are processed -- the externally generated event is blocked regardless of the events, transitions, and states being processed and it remains blocked until the condition is satisfied (i.e., no more self-directed events are in the queue). But if there are no self-directed events initially on the queue, the externally generated event could be processed immediately. I see the same possibility (the condition is already satisfied) here. Therefore the cell would have to be multivalued. I could carry this further, in that I believe one could interpret 'some condition' to refer to the state of the instance or the state of the domain itself. Thus one might expect the event to remain blocked until, say, some attribute value was updated by some other object's instance or even a synchronous wormhole. For example, if the only event on the queue were the blocked HOLD event, the entire domain might remain inactive until the attribute got an appropriate value so that the block could be cleared and processing continued. 
In this case no events or transitions are necessary to unblock the held event. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > I've noticed that in several examples objects send events directly to > other objects (target is known ahead of time) rather than "posting" an > event anonymously and allowing those wishing to know to be notified > when an event occurs. > > I don't have my S-M books immediately available. What does S-M say > about this? S-M decrees that in the notation the event must have a destination (i.e., the state model that will receive the event). Moreover, a particular event can be directed at only one state model. At the risk of lighting a toasty fire, I think this is a problem. It strikes me as inconsistent with the way one should try to design state machines (i.e., independent of context). I would prefer that the notation allow an event to be generated anonymously in an action and let the analyst define separately where it goes. Where the event goes should be a domain level issue, not an FSM level decision. I would like a separate table where the analyst defined that event X always goes to state model Y. Interestingly, the process models take this one step further by defining the specific instance (i.e., the specific state machine), even though OL:MWS specifically says the destination is the state model, not the state machine. One can argue that the process model is a different level of abstraction than the state model, so no harm/no foul when it deals with instances. But I have a hard time buying that. Unfortunately, the process models are the logical place to manipulate the necessary identifiers for IM relationship navigation to a particular instance. And the process models are privy to the instance state that determines conditional navigation to specific instances. So it is hard to see how one would move that out of the process models without an awful lot of duplication or redundancy. The destination has always bothered me, but I don't have any bright ideas for alternatives. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- At 01:45 PM 11/8/99 +0000, you wrote: >ian@kc.com (Ian Wilkie) writes to shlaer-mellor-users: >Carolyn Duby wrotes to shlaer-mellor-users: >-------------------------------------------------------------------- > >> In UML the analyst may mark a set of events to be deferred within a state. >> A deferred event is saved until the active object enters a state where the >> event is not deferred. See page 249 of the UML Reference Manual and pages >> 292 and 297 of the UML User Guide for more information. > >Yes, this is true. However, the mechanism is slightly different from the >OOA97 HOLD effect. In OOA97, HOLD is considered to be an intrinsic part of >the event processing cycle for the state machine. In UML, there is actually >a "defer" action, which the user puts as a response to the event. 
A clear >mapping from OOA97 HOLD to UML defer can be found - Each OOA97 hold effect >becomes an "internal" transition (i.e. to the same state) with a single >transition action which is "defer". > >However, UML implicitly allows more freedom (or rope if you prefer). For >example, you could have an action sequence on a transition which does >a few operations then "defers" the event. Not only is the model questionable, >but the semantics of the defer are difficult, since the transition is made >but the response is deferred. I don't think this is the intention of the definition as it is called "deferred event" rather than deferred action. Although now that I go back and read the definition I can see how it could be interpreted as you stated. I think the UML User Guide is a little more clear. For example, pages 291-292 list the parts of the state. Entry and exit actions are listed separately from deferred events. Page 297 describes a deferred event as being postponed until later. I agree that having an action followed by a defer would be a questionable modeling construct. Carolyn ________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com 888-OOA-PATH effective solutions for software engineering challenges Carolyn Duby voice: +01 508-673-5790 carolynd@pathfindersol.com fax: +01 508-384-7906 ________________________________________________ Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > S-M decrees that in the notation the event must have a > destination (i.e., the state model that will receive the > event). Moreover, a particular event can be directed at > only one state model. > At the risk of lighting a toasty fire, I think this is a > problem. > The destination has always bothered me, but I don't have any > bright ideas for alternatives. There are 2 issues: sending an event to an instance of unknown class, and sending a broadcast event. There are several mechanisms/analysis patterns that reduce these problems to a tolerable level. When you want to send a message to an unknown object, you can separate identity from class using subtyping. Let's say that you have 2 objects that you want to send an event to. You introduce a supertype (name it, say, event-X receiver), and make sure its identifier is a subtype of its subtypes. You can then send messages to the supertype. Obviously, this is horrendous modeling. So you don't do that. Instead, you realize the need, and use this as a trigger for more detailed analysis of the problem. You find that either there is a useful abstraction (that you missed) in the problem domain, or there isn't. If there is, then you adjust the model to introduce the newly discovered supertype. If there isn't, then you probably have domain pollution. If you have domain pollution, then you factor your domain, and use the bridge to determine the appropriate recipient (possibly a polymorphic object in the other domain). The other situation is the broadcast event. Here, you probably know the class of the recipient, but not the identity. (The "class" may be a supertype object.) For this, you analyse the problem to discover what the broadcast event really _is_ in the problem domain. If it _isn't_, then you look to other domains. If it is part of the problem domain, then your analysis will reveal an object for the broadcast event. 
Either the creation state of that object searches for a recipient, or you notify an assigner (or other object) that the event-object has been created (note that the event-object has a domain-specific name). Eventually, a recipient of the event-object will form a relationship with it, and then use it. The only way to broadcast an event without using an object is to go to another domain. Then an SDFD can look for an object in the other domain. I've never yet found a situation where a broadcast event cannot be decomposed into either an event-object or a multi-domain solution. Often, the need to broadcast is an indication that there's something wrong. (If you want to dispute this, then provide an example :-)) Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) ESP Article "Neal Welland" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman.... Now how did I know you would want to know more? lahman on 08/11/99 13:32:02 Please respond to shlaer-mellor-users@projtech.com To: shlaer-mellor-users@projtech.com cc: (bcc: Neal Welland/MAIN/MC1) Subject: Re: (SMU) ESP Article lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Welland... > From my somewhat dim recollection, the HOLD mechanism was introduced due to some > pressure from within the SM user community, particularly the telecomms > fraternity. At the time (early 96), we were attempting to migrate to SM, but > when analysing the functionality of our existing system, it seemed there was the > absolute need for an ability to "prioritise" the set of possible events that > could be processed next. We referred to this as "state machine collapse". Part > of the problem seemed to be centred around the fact that SM uses Moore state > models. Our application seemed better suited to a Mealy state model. I am curious if you have a specific example. I have never personally encountered a situation where it (Mealy vs. Moore) made a difference. While the specific states and transitions might be different under each approach, I would think they are convertible so that there would not be an 'absolute need' to prioritize (other than self-directed events) -- at least because of the model used. NAW: As Ian Wilkie indicated in his response, the application was telephony based, call control to be a little more explicit. My reference to Mealy and Moore state models was merely (excuse the pun) an attempt to recall what had initiated the work at KC. Without digging into the archives, further searches of the grey matter suggest that the main problems were the potential proliferation of additional states when dealing with unexpected events (I include events arriving in the "wrong" order). Additional states can make an STD highly unreadable. Because a Mealy model associates action with the transition, rather than the state, it was felt that a Mealy STD would be easier to extend (more arrows are better than more states?) in these circumstances. I am not a proponent of either model. Like you I believe you can convert from one to the other. However, in this application, with already "large" state models, the problems of dealing with a multitude of unexpected events seemed to be better dealt with using the Mealy model. Neal. -- H. S. 
Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article ian@kc.com (Ian Wilkie) writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman wrote to shlaer-mellor-users: > -------------------------------------------------------------------- > The interpretation of HOLD is in the eye of the beholder. > > To me the HOLD simply means that if the event is encountered while the FSM is in a > particular state, the event's processing must be deferred until some condition is > satisfied. If the condition is already satisfied when the event is received, then > the event could be processed immediately. This is exactly what happens to externally > generated events when self-directed events are processed -- the externally generated > event is blocked regardless of the events, transitions, and states being processed > and it remains blocked until the condition is satisfied (i.e., no more self-directed > events are in the queue). But if there are no self-directed events initially on the > queue, the externally generated event could be processed immediately. I see the same > possibility (the condition is already satisfied) here. Therefore the cell would have > to be multivalued. > > I could carry this further, in that I believe one could interpret 'some condition' to > refer to the state of the instance or the state of the domain itself. Thus one might > expect the event to remain blocked until, say, some attribute value was updated by > some other object's instance or even a synchronous wormhole. For example, if the > only event on the queue were the blocked HOLD event, the entire domain might remain > inactive until the attribute got an appropriate value so > that the block could be cleared and processing continued. In this case no events or > transitions are necessary to unblock the held event. While one can certainly define the behaviour of HOLD to be as you describe, this was certainly not the intent of what we wrote in the OOA97 paper. The mechanism we described there was that if the effect (STT cell entry) defined for the combination event E1/state S1 was HOLD, then the event E1 is *never* dequeued and processed while the instance exists in the state S1. Thus the event processing algorithm for an instance goes from:

loop forever
    next_event = find_next_event_on_queue()
    if ( next_event != NULL ) then
        switch effect(next_event,current_state)
        case 'Transition'
            remove_event_from_queue(next_event)
            new_state = find_destination_state(next_event,current_state)
            execute_state_action(new_state)
            current_state = new_state
        case 'Cannot Happen'
            remove_event_from_queue(next_event)
            raise_exception()
        case 'Ignore'
            remove_event_from_queue(next_event)
        endswitch
    else
        wait_for_new_event_in_queue() # Blocks
    endif
endloop

to become.... 
loop forever
    next_event = NULL
    foreach event_in_queue in {all_events_in_queue}
        if effect(event_in_queue, current_state) != 'Hold' then
            next_event = event_in_queue
            break
        endif
    endloop
    if ( next_event != NULL ) then
        switch effect(next_event,current_state)
        case 'Transition'
            remove_event_from_queue(next_event)
            new_state = find_destination_state(next_event,current_state)
            execute_state_action(new_state)
            current_state = new_state
        case 'Cannot Happen'
            remove_event_from_queue(next_event)
            raise_exception()
        case 'Ignore'
            remove_event_from_queue(next_event)
        endswitch
    else
        wait_for_new_event_in_queue() # Blocks
    endif
endloop

Note: The above description is not perfect, but I hope it illustrates the behaviour wrt Hold. (Specifically, event priorities are not dealt with). This is broadly the way our simulator and architecture processes Hold. Perhaps I should update the OOA97 paper to make this clear. Subject: Re: (SMU) Anonymous event notification Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Dave Whipp wrote: > I've never yet found a situation where a broadcast event cannot be > decomposed into either an event-object or a multi-domain > solution. Often, the need to broadcast is an indication that there's > something wrong. (If you want to dispute this, then provide an > example :-)) Nah! I can't dispute this. :^) What I was thinking when I asked the question was a situation where, when the temperature changes, a temperature sensor would post the change to the UI bridge. In the UI you might have several objects receiving the event: a temperature graph (that tracks temperature over time), and maybe a thermometer (that just reflects the new temperature). Kind Regards, Allen Nova Engineering, Inc. Cincinnati, OH, USA, 45246 http://www.nova-eng.com Subject: Re: (SMU) ESP Article lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Wilkie... > While one can certainly define the behaviour of HOLD to be as you describe, > this was certainly not the intent of what we wrote in the OOA97 paper. Yes. But FWIW...

> forever
>
> next_event = NULL
> foreach event_in_queue in {all_events_in_queue}
> if effect(event_in_queue, current_state) != 'Hold' then
> next_event = event_in_queue
> break
> endif
> endloop

The key here is how 'effect' determines its return value. K-C's OOA97 implementation of 'effect' has the result depend solely on the combination of event and state. In that case HOLD is permanent in the STT so the cell is not multivalued. This is an aesthetically pleasing solution because it is clean within the existing notation. But it comes at a cost of solving a rather narrow range of problems (i.e., where the condition depends solely on the state of a particular state machine). My assertion is that my interpretation is the most general in that 'effect' can make its decision based upon almost any criteria drawn from the domain state. For example, one can consider whether the entire domain is ready to accept a particular external event. Alas, this has some nasty side effects. K-C's implementation does not require colorization or specialized translation rules while mine very likely would (i.e., the implementation of 'effect' would depend upon external information specific to the application). The alternative would be to provide some additional notational support at the OOA level to describe the HOLD condition (e.g., something akin to Whipp's supplementary DFDs). 
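For what it's worth, here is the same HOLD-aware dequeue loop rendered as a small, self-contained C++ sketch. Everything in it is hypothetical (a toy two-state, two-event model); it is not taken from the OOA97 paper or from any vendor's architecture. The effect() lookup is the hook being argued about: in the K-C reading it depends only on (state, event), while the more general reading would let it consult wider domain state.

#include <cstdint>
#include <deque>
#include <iostream>

enum Effect : uint8_t { TRANSITION, CANNOT_HAPPEN, IGNORE, HOLD };

struct Event { int id; };

// Toy model: 2 states x 2 events. effectTable plays the role of the STT lookup
// in the pseudocode; destination gives the new state for a transition.
static const Effect effectTable[2][2] = { { TRANSITION, IGNORE }, { HOLD, TRANSITION } };
static const int    destination[2][2] = { { 1, 0 },               { 1, 0 } };

static Effect effect(int state, const Event& ev) { return effectTable[state][ev.id]; }
static int    find_destination_state(int state, const Event& ev) { return destination[state][ev.id]; }
static void   execute_state_action(int newState) { std::cout << "entered state " << newState << "\n"; }

// One pass of the loop: scan the queue for the first event that is not held in
// the current state and process it; held events stay queued, in order.
static bool dispatch_one(std::deque<Event>& queue, int& currentState)
{
    for (auto it = queue.begin(); it != queue.end(); ++it) {
        Effect e = effect(currentState, *it);
        if (e == HOLD)
            continue;                        // leave held events on the queue
        Event ev = *it;
        queue.erase(it);
        if (e == TRANSITION) {
            currentState = find_destination_state(currentState, ev);
            execute_state_action(currentState);
        }                                    // IGNORE: drop silently (CANNOT_HAPPEN not modelled here)
        return true;
    }
    return false;                            // only held events (or none) remain: caller blocks for more
}

int main()
{
    std::deque<Event> q = { {0}, {1}, {0} }; // in state 1, event 0 is HOLD
    int state = 1;
    while (dispatch_one(q, state)) {}
    std::cout << q.size() << " event(s) still held\n";
}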
My original point was that in the general case HOLD in the STT changes the nature of the STT. There are two choices. You can narrow the interpretation of HOLD until that problem is eliminated, as in K-C's approach. Or you can provide more information to supplement the STT that describes the condition and the result for that state if the HOLD is removed. Today that extra information would have to be provided to the translation; hopefully in the future it could be provided in the OOA notation for those cases where it is a problem space issue. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > The destination has always bothered me, but I don't have any > > bright ideas for alternatives. > > There are 2 issues: sending an event to an instance of > unknown class, and sending a broadcast event. I guess I wasn't clear about this. My issue is around where you specify a known destination. So I am not worried about broadcast events; I don't have a problem with an event being restricted to going to a single model or instance. And I am not trying to resolve an unknown subtype in hand into some known destination. Presently the S-M notation requires the event destination be defined to the instance level in a state action (i.e., at the event generator process). My problem is that I think the destination of an event at the state action level should always be unspecified because that is context information (unless, perhaps, it is self-directed). Instead, I want the analyst to resolve the destination at the domain level (i.e., OCM level) where context is relevant. I can rationalize the argument that the ADFD is an orthogonal notation to the STD so that static IM relationship navigation to resolve data stores and instance identifiers is OK. But that is the world of data; it is a static world that simply supports the dynamic world. I have a real hard time rationalizing collecting instance identifiers as a requirement for generating an event, though, because that is dynamic context, which is where state machines live. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > I can rationalize the argument that the ADFD is an orthogonal notation to > the STD so that static IM relationship navigation to resolve data stores > and instance identifiers is OK. But that is the world of data; it is a > static world that simply supports the dynamic world. I have a real hard > time rationalizing collecting instance identifiers as a requirement for > generating an event, though, because that is dynamic context, which is > where state machines live. > Is it not the case that the IM shows the static case, IE the objects that may be instantiated and the relationships that may link them? However the actual instances of objects and relationships exist at specific times, EG when a particular state is occupied. 
Therefore the actual instances represent dynamic information which may reasonably be used within a specific state. Dave Harris Subject: Re: (SMU) Anonymous event notification Simon Wright writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman... > Presently the S-M notation requires the event destination be defined to > the instance level in a state action (i.e., at the event generator > process). My problem is that I think the destination of an event at the > state action level should always be unspecified because that is context > information (unless, perhaps, it is self-directed). Instead, I want the > analyst to resolve the destination at the domain level (i.e., OCM level) > where context is relevant. Isn't it the case that actions often only generate events at all because of the need to create a cooperating web of objects? And it may well be that the models I'm used to are bad models, but the target instance often depends on the originating event and the result of complex navigations from 'here' ... a bit hard to put in on the OCM? -- Simon Wright Email: simon.j.wright@gecm.com Alenia Marconi Systems Voice: +44(0)2392-701778 Integrated Systems Division FAX: +44(0)2392-701800 Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris... > Is it not the case that the IM shows the static case, IE the objects that > may be instantiated and the relationships that may link them? However > the actual instances of objects and relationships exist at specific times, > EG when a particular state is occupied. Therefore the actual instances > represent dynamic information which may reasonably be used within a > specific state. That information, concerning the dynamics of instances, is about the context in which a particular state machine lives. It is relevant at the OCM or domain level. But a state machine should not have knowledge of its context. An individual state should not even know what the previous state was. Just as a state should not know where the event that transitioned to it came from, it should not know where an event that its action generated is going. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Wright... > Isn't it the case that actions often only generate events at all > because of the need to create a cooperating web of objects? Yes and no. Ideally an event that is not self-directed should be an announcement that some life cycle activity has completed. Such announcements may be crucial to overall domain functionality but they should depend only on the life cycle of the instance, not its context. Your 'web of cooperation' comes into play to the extent that the analyst decides what other objects might be interested in such an announcement. Thus the state machine broadcasts to the world while the analyst filters to an interested party. I am arguing that the analyst's filtering should be done outside the state machine. 
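To illustrate what "filtering outside the state machine" might look like in code (hypothetical names throughout; this is not S-M notation and not anyone's published architecture), a domain-level routing table can let an action announce an event without naming a recipient, along the lines of the "event X always goes to state model Y" table suggested earlier in the thread:

#include <functional>
#include <iostream>
#include <map>
#include <string>

// Hypothetical sketch: a state action announces "something happened" by name only;
// a domain-level routing table, filled in by the analyst, decides who hears it.
using EventId = std::string;
using Handler = std::function<void(const EventId&)>;

static std::map<EventId, Handler>& routingTable()
{
    static std::map<EventId, Handler> table;
    return table;
}

static void Announce(const EventId& ev)          // called from a state action
{
    auto it = routingTable().find(ev);
    if (it != routingTable().end())
        it->second(ev);                          // deliver to the analyst-chosen recipient
    // else: nobody registered an interest, so the announcement is simply dropped
}

int main()
{
    // Domain-level decision, made outside any state machine:
    routingTable()["MeasurementComplete"] = [](const EventId& ev) {
        std::cout << ev << " routed to the Sample state model\n";
    };

    Announce("MeasurementComplete");             // the action neither knows nor cares who listens
    Announce("SomethingNobodyCaresAbout");
}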
> And it may well be that the models I'm used to are bad models, but the > target instance often depends on the originating event and the result > of complex navigations from 'here' ... a bit hard to put on the > OCM? One has to model with the notation that one has, so there isn't a lot of choice. But if one hypothesizes a notation change, then other possibilities open up. For example, there is nothing to prevent one from associating just the chain of referential attribute accessors to identify the target with the event rather than the action (i.e., in a separate DFD tagged to the event). For the routine cases this might even work rather well. But there are a couple of problems that make this messy in general. --The state action may need the same (or part of the same) accessor chains to obtain data it needs and that leads to a redundancy between the two diagrams. --When a lot of events are being generated, one has a lot of mini-DFDs floating around that may be annoyingly difficult to keep track of. --When events are generated conditionally one would like to see the basis for that decision in the action, but if it depends upon whether the target instance exists one is back on a contextually slippery slope again. You can get around this particular problem by modifying the state machine, but one could argue that such a modification is a manifestation of context in itself. As I indicated, I don't have any clever ideas for overcoming these problems. Moreover, I can play Devil's Advocate to myself by arguing that it is possible that the problem is insoluble in that it may be impossible to separate such context from the state machines. Two problem cases come to mind: Taking my argument to your first point to its logical conclusion, the state machine should issue an announcement event about everything it does, just in case somebody is interested. These events might be mostly ignored in a particular domain because nobody else cares. Nobody I know of builds state machines this way and I think it could get pretty ugly. So the analyst always does some contextual filtering in the actions by selecting which particular announcements will be interesting. Another example is a very common construct where an action sends an event to each member of a subset of the instances on the other side of a 1:M. The subset is clearly a dynamic context and it is difficult to imagine a way to get around this in any reasonable fashion. So basically we have a conundrum: how does one reconcile that state machines should be context independent while generating events is context dependent? -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > What I was thinking when I asked the question was a situation where, > when the temperature changes, a temperature sensor would post the > change to the UI bridge. In the UI you might have several objects > receiving the event: a temperature graph (that tracks temperature over > time), and maybe a thermometer (that just reflects the new > temperature). The short answer is that bridges aren't modeled in the same way and can have different rules since they are regarded as being part of the Architecture rather than the OOA.
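Purely as illustration (same caveat as above: the names are invented and this is not any particular architecture's bridge mechanism), the receiving side of the temperature example might be sketched like this. The bridge, living outside the OOA, is free to fan a single incoming notification out to several UI instances without either domain knowing about the other.

class UIBridge:
    def __init__(self, receivers):
        # The graph and the thermometer register here, not with the sensor.
        self._receivers = receivers

    def temperature_changed(self, deg_c):
        # One notification from the sensor domain becomes one event per
        # interested instance in the UI domain.
        for receiver in self._receivers:
            receiver.take_event("TEMP_UPDATE", deg_c)

class TemperatureGraph:
    def take_event(self, label, value):
        print("graph: plot point at", value)

class ThermometerWidget:
    def take_event(self, label, value):
        print("thermometer: display", value)

bridge = UIBridge([TemperatureGraph(), ThermometerWidget()])
bridge.temperature_changed(37.2)

Nothing in the sensor domain changes if a third UI object becomes interested; only the bridge's receiver list does.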
But your comment raises an interesting Deep Philosophical Issue related to the wormhole paper that garnered plenty of discussion when it first came out of the closet. That paper seems to suggest that only syntactic shifts (to use Whipp's term) are remapped, such as event E1 becoming event K7 in the other domain or a units conversion on a data packet member. It does not seem to allow semantic shifts, such as one event begetting multiple events. Personally I think semantic shifts are necessary to accommodate porting of domains. For example, a common problem is that one domain makes requests at a high level (e.g., TestIt) while the other domain expects requests at a low level (e.g., SetupTest, InitiateTest, FetchTestResults). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > /snip/ > > So basically we have a conundrum: how does one reconcile that state machines > should be context independent while generating events is context dependent. > > -- Certainly a state is context independent in that there is no information on previous state. That said, in a state the processing that occurs could possibly be dependent on every attribute in the IM (and other domains via wormholes or implicit bridging). The goal when modeling is to reduce those dependencies via layering of responsibilities on the (preliminary) OCM. So IMHO there's nothing to reconcile, it's a fact of life. :) And you deal with it by minimizing the dependencies between the state models in your domain. -- Project Technology -- Shlaer/Mellor OOA/RD Instruction, Consulting, BridgePoint, Architectures -------------------------------------------------------- Gregory Rochford grochford@projtech.com 214.350.7616 URL: http://www.projtech.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > As I indicated, I don't have any clever ideas for overcoming > these problems. Moreover, I can play Devil's Advocate to myself > by arguing that it is possible that the problem is insoluble in > that it may be impossible to separate such context from the > state machines. Two problems cases come to mind: The mere fact that a problem is insoluble should not stop us from trying. The only thing that should stop us trying would be the conclusion that the problem isn't a problem. (a quibble: a devil's advocate would argue the latter, not the former) Let me start by saying that I have some sympathy with the idea that event generators should not know their destination. It does appear to add coupling between objects. I will also note that your comment about self-directed events (that they *should* know their destination) implies that they are a different kind of thing. A notation that merges 2 kinds of thing into one concept will have problems. So lets completely ignore self-directed events: we can get rid of them later. Your basic idea is that an event is an indication that something has happenned, and that the object has no idea who wants to know this fact. 
This would imply that all events should have broadcast semantics. The problem with broadcasting is that the receiver must know who to listen for. All you do is move the coupling from the sender to the receiver. The event would have to carry the identity of the sender, and the receiver will act as an observer on specific sender-ids. I don't see any benefit. This last paragraph supports the idea that the problem may be insoluble. It doesn't say that it's not a problem. However, let's increase the scope. SM says that the unit of reuse is the domain, and accepts that objects within a domain may be more tightly coupled than in other methods. Furthermore, it defines [or perhaps not] the concept of the bridge to act as a powerful mechanism for decoupling. Across a bridge, neither sender nor receiver knows anything about each other. The mapping could be 1:1, 1:M, M:1 ... even M:M. So SM would argue that the event-coupling is not a problem. It is within a domain, and a domain is an integrated unit. An implication of this is that decoupling is only required (desired) between subject matters, not within a subject matter. I'd have to say that the jury is still out on this. Personally, I like to augment the domain with additional mechanisms to reduce intra-domain coupling. My DFD which you refer to is one such mechanism. By moving the data to the object, the object never needs to know where its data came from. The cost of this is that you have to maintain the data transport network (i.e. the DFD). There is no semantic shift in the DFD, so it is potentially simpler than a bridge, but this does not completely eliminate the cost of decoupling. So what this boils down to is that you have to pay the cost of complexity somewhere. Either you couple the objects, and pay the cost later in more difficult maintenance; or you build decoupling mechanisms, and pay the cost in more tedious maintenance. The advantage of the latter is that it's probably easier to build mechanised support for tedious maintenance than for difficult maintenance. So, is there a problem? Is it soluble? Draw your own conclusions! Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification Greg Eakman writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen Theobald wrote: > > Allen Theobald writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Greetings! > > I've noticed that in several examples objects send events directly to > other objects (target is known ahead of time) rather than "posting" an > event anonymously and allowing those wishing to know to be notified > when an event occurs. > > I don't have my S-M books immediately available. What does S-M say > about this? > > Kind Regards, > > Allen > Nova Engineering, Inc. > Cincinnati, OH, USA What you describe is the Observer pattern in the Design Patterns book, also known as publish-subscribe. Essentially, an object announces that it manages an event. Other objects can then subscribe to be notified when the event occurs by registering their interest with the publishing object. When the publishing object detects the event occurring, it notifies all subscribers. This pattern is used widely in GUI frameworks, X-Windows, and Java AWT, though the implementations vary.
Subscribers register interest in mouse movements, button clicks, etc. In SM terms, the basic semantics do not support the idea of broadcasts, but room is clearly left to build it. From an analysis perspective, you could analyze the problem and capture it as a support domain. A PublishedEvent and a RegisteredSubscriber object with a 1:M relationship should cover the basics. Of course, you'll need some way of specifying in the analysis the action to be invoked when the event occurs, the equivalent of C function pointers. Another way would be to extend the architecture domain to provide these services. Greg -- _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Greg Eakman voice: +01 508-384-1392 | grege@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > Another example is a very common construct where an action > sends an event to each member of a subset of the instances > on the other side of a 1:M. The subset is clearly a > dynamic context and it is difficult to imagine a way to > get around this in any reasonable fashion. Many people who write about OOD have the concept that messages can only be sent along relationships. You could apply this concept to SM-OOA. The knowledge by one instance of a subset of the instances of another object is a relationship. By adding additional relationships you could have a relationship for each subset relationship. You could then enhance the OCM notation to associate each event with a relationship. This is essentially what my domain-DFD (which supersedes the OAM) does to direct data to its destination. Any change of this type would have many consequences. These will have to be thought through. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > The problem with broadcasting is that the receiver must know > who to listen for. All you do is move the coupling > from the sender to the receiver. The event would have to > carry the identity of the sender, and the receiver will > act as an observer on specific sender-ids. I don't see any > benefit. Why would the receiver need to know the identity of the sender? What I would like to see is that instance A1 broadcasts event E397. Meanwhile instance W14 accepts event E29. So far both state machines can be designed independently. Now the Analyst steps back and thinks. "To solve my overall problem W14 is going to have to get E29 from somewhere. Sonofagun, I just happen to have A1 generating E397 under exactly the right conditions. I'll just route A1's E397 to W14 as E29 right here in my new Event Mapping Table. And I'll let that Architect, who already has too much time on his hands, figure out how to map the data in volts on E397 to millivolts on E29." In effect one is just using a new notation at the OCM level as a bridge to route events and map data packets between objects.
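For what it is worth, here is a toy rendering of that Event Mapping Table, again as a Python sketch with invented names rather than a proposal for the notation itself. A1's action only says "generate E397"; the table, which lives at the OCM level, supplies the destination, the renaming to E29, and the volts-to-millivolts conversion.

EVENT_MAP = {
    # generated event : (target instance, event it is delivered as, data conversion)
    "E397": ("W14", "E29", lambda volts: volts * 1000.0),   # volts -> millivolts
}

class EventRouter:
    def __init__(self, instances):
        self._instances = instances

    def generate(self, event, data):
        # Called by the event generator process; no destination is supplied.
        target, delivered_as, convert = EVENT_MAP[event]
        self._instances[target].accept(delivered_as, convert(data))

class Receiver:
    def accept(self, event, millivolts):
        print(event, "received carrying", millivolts, "mV")

router = EventRouter({"W14": Receiver()})
router.generate("E397", 0.042)    # A1's state action knows only this much

The redundancy problem mentioned earlier shows up as soon as the same conversion or navigation is also needed inside the action itself.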
> So what this boils down to is that you have to pay the cost > of complexity somewhere. Either you couple the objects, and > pay the cost later in more difficult maintenance; or you > build decoupling mechanisms, and pay the cost in more > tedious maintenance. The advantage of the latter is that > its probably easier to build mechanised support for tedious > maintenance than for difficult maintenance. Yes, I would just place that complexity in a supplementary mapping at the OCM level to allow the individual state models to be independent. My problem is I can't figure out a way to do this w/o introducing redundancy or that will elegantly handle all the situations. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > Certainly a state is context independent in that there is no information > on previous state. That said, in a state the processing > that occurs could possibly be dependent on every attribute in the IM > (and other domains via wormholes or implicit bridging). The goal when > modeling > is to reduce those dependencies via layering of responsibilities on the > (preliminary) OCM. So IMHO there's nothing to reconcile, it's a fact of > life. :) I already stated that I can rationalize the attribute dependencies on the grounds that they represent the static IM so dependence upon attributes is not my problem. However, I see event generation as a different swamp. Events represent the dynamic flow of control. If a state is not supposed to know such intimate details as its own last or next state, how can one reconcile that it knows that some other object will *subsequently* process the event (even if it is then IGNOREd)? To me this says that the state now knows at least something about relative sequencing of the dynamic activities between objects -- kind of hard to justify if an instance is forbidden to know that about its own processing. B-) > And you deal with it by minimizing the dependencies between > the state models in your domain. This makes me suspect that we are talking past one another because this does not strike me as either relevant or possible at the event level. How does one do that? Once the responsibilities have been allocated to the objects, we are taught to develop each object's state machine independently based upon the intrinsic life cycle and behavior of the object. I wouldn't expect a lot of choices about what the states and events were for a given level of domain abstraction. Thus I expect the dependencies to be pretty well fixed (albeit indirectly) by the time one gets around to doing the domain's SMs. In practice what we do is design the state models without event generation and then make a second pass to see where the events, that are internal to the domain and needed for the defined transitions, should be generated. Again, at that point the number of dependencies is fixed, so it is just a question of where they are allocated rather than minimizing them. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > The knowledge by one instance of a subset of the instances > of another object is a relationship. By adding additional > relationships you could have a relationship for each subset > relationship. You could then enhance the OCM notation to > associate each event with a relationship. Just to be sure I understand this... There would be multiple relationships between the same objects such that each relationship represented the particular 1:M set that was relevant for a particular circumstance. For example, if one had 1:M between Hunter and Aardvarks but Hunter might want to send one message to only ugly Aardvarks and another message to only tawdry Aardvarks, there would be a second 1:M' relationship collecting the ugly critters and a third 1:M'' relationship collecting the tawdry ones. In an action one would generate a single event E1 and a single E2. The OCM notation would cause E1 to be duplicated for each M' and E2 to be duplicated for each M''. The targeting to the specific instances would be handled automatically once E1 and E2 were associated with the relationships. It definitely has charm. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > Why would the receiver need to know the identity of the > sender? You are right. It's possible to avoid this by using a routing mechanism. > What I would like to see is that instance A1 broadcasts event > E397. Meanwhile instance W14 accepts event E29. So far both > state machines can be designed independently. Now the Analyst > steps back and thinks. "To solve my overall problem W14 is > going to have to get E29 from somewhere. Sonofagun, I just > happen to have A1 generating E397 under exactly the right > conditions. I'll just route A1's E397 to W14 as E29 right > here in my new Event Mapping Table. But you still have to ensure, when A1, A2 and A3 all broadcast E397 (each with different supplemental data), that W14, W23, W31 and A7 all receive the correct supplemental data on their E29. If E29 maps to E397, then both relate to the same situation. But they are both in the same domain, so the analyst shouldn't define the same situation in two places. That violates "one fact in one place". Each situation should be associated with exactly one event. It is possible that many objects might wish to generate or receive that event, and that we may wish to decouple senders from receivers. ... > And I'll let that Architect, who > already has too much time on his hands, figure out how to map the data > in volts on E397 to millivolts on E29." In effect one is just using a > new notation at the OCM level as a bridge to route events and map data > packets between objects. ... It would be wrong, however, to attempt to completely decouple them within a domain. There should be no need to map event-data within a domain, because there should be no semantic shift between the two.
The attempt to encapsulate too much within one object is the trap of forgetting that analysis aims to expose, not to encapsulate, information within a subject matter. You aren't doing design! [and if you did want to map the values, it would be an analysis-level mapping, not architectural]. Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > > The knowledge by one instance of a subset of the instances > > of another object is a relationship. By adding additional > > relationships you could have a relationship for each subset > > relationship. You could then enhance the OCM notation to > > associate each event with a relationship. > > Just to be sure I understand this... > > There would be multiple relationships between the same > objects such that > each relationship represented the particular 1:M set that was relevant > for a particular circumstance. For example, if one had 1:M between > Hunter and Aardvarks but Hunter might want to send one message to only > ugly Aardvarks and another message to only tawdry Aardvarks, there would > be a second 1:M' relationship collecting the ugly critters and a third > 1:M'' relationship collecting the tawdry ones. > > In an action one would generate a single event E1 and a single E2. > The OCM notation would cause E1 to be duplicated for each M' and > E2 to be duplicated for each M''. The targeting to the specific > instances would be handled automatically once E1 and E2 were > associated with the relationships. Yes. That's essentially what I'm saying. There are, of course, a few issues: How are self-directed events handled? Do you add relationships to events (do you allow relationship chains)? Or do you add 0 or more events to relationships? In this case, can all the way-points on the chain receive the events? Do they need to explicitly ignore events that just pass through them, or do they explicitly accept events that they are interested in? How does this interact with the current polymorphism mechanism? Can it be seen as a generalisation of polymorphism? What does this say about the other things that are special for the is-a relationship? > It definitely has charm. This is going to ruin our reputations: we actually agree on something! :-) Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford... > > > Certainly a state is context independent in that there is no information > > on previous state. That said, in a state the processing > > that occurs could possibly be dependent on every attribute in the IM > > (and other domains via wormholes or implicit bridging). The goal when > > modeling > > is to reduce those dependencies via layering of responsibilities on the > > (preliminary) OCM. So IMHO there's nothing to reconcile, it's a fact of > > life. :) > > I already stated that I can rationalize the attribute dependencies on the grounds > that they represent the static IM so dependence upon attributes is not my problem.
> > However, I see event generation as a different swamp. Events represent the > dynamic flow of control. If a state is not supposed to know such intimate details > as its own last or next state, how can one reconcile that it knows that some other > object will *subsequently* process the event (even if it is then IGNOREd)? To me > this says that the state now knows at least something about relative sequencing of > the dynamic activities between objects -- kind of hard to justify if an instance > is forbidden to know that about its own processing. B-) An instance knows everything about its own processing :D Being in a certain state is everything it needs to know about what processing it needs to do at that point in time. There are object patterns where state models have event protocols to get something done (allocate a resource, whatever). The state models participating in that protocol are in close communication while they are executing that protocol. I'm having a problem seeing why this is a bad thing. > > > And you deal with it by minimizing the dependencies between > > the state models in your domain. > > This makes me suspect that we are talking past one another because this does not > strike me as either relevant or possible at the event level. How does one do > that? Once the responsibilities have been allocated to the objects, we are taught > to develop each object's state machine independently based upon the intrinsic life > cycle and behavior of the object. Given the responsibilities the object has. The behavior of the object depends on what its responsibilities are. (As Neil Lang says: "All examples are contained in the ODMS model" :) The OCM for the ODMS shows a flow of control from the Disk Request down through the Disk Ownership Assigner, the Disk, etc. In this OCM, control is distributed amongst all the state models. Consider an OCM where control was centralized in, e.g., the Disk object. The state models built with this strategy would be very different. The Disk has to know everybody's business. It has all the responsibilities. The OCM looks like a spider's web with the Disk at the center. This OCM has more dependencies than the one given as a solution in the course. That's what I meant by minimizing dependencies between state models in your domain, by the layering of responsibilities (and communication) on the OCM. > I wouldn't expect a lot of choices about what > the states and events were for a given level of domain abstraction. Thus I expect > the dependencies to be pretty well fixed (albeit indirectly) by the time one gets > around to doing the domain's SMs. > > In practice what we do is design the state models without event generation and > then make a second pass to see where the events, that are internal to the domain > and needed for the defined transitions, should be generated. Again, at that point > the number of dependencies is fixed, so it is just a question of where they are > allocated rather than minimizing them. > So where do you do the OCM layering? Or is that implicit (i.e., in the analyst's head) in how you do your state models? best gr Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > There are object patterns where state models have event protocols to > get something done (allocate a resource, whatever). The state models > participating in that protocol are in close communication while > they are executing that protocol.
> I'm having a problem seeing why this is a bad thing. I can argue that many such protocols are an indication of pollution (solution-domain thinking). However, given that the protocol does exist, we still want to keep things as simple as possible. > That's what I meant by minimizing dependencies between state > models in your domain, by the layering of responsibilities (and > communication) on the OCM. No one would argue that you shouldn't minimise the dependencies within a domain. Layering is a good first-cut approach to this goal. But, once you have minimised them, can you minimise them even more? The answer would seem, by definition, to be "no". But, Lahman and I are discussing changes to the notation. If we can change the rules, then perhaps the answer can become "yes". You mention patterns. Let me suggest an ADFD pattern that is almost universal: event generators have a 'destination id' that comes from an accessor. This accessor may read referential attributes, or it may be a chained navigation. A further consideration is that many states within a state model may generate events to the same other-instance. If events can be associated with a relationship, then the accessors can be eliminated from the state models. The cost of this removal is that you have to associate events with relationships, and maintain this association. This might not gain very much. But the change introduces new possibilities. The sender and receiver can be decoupled. I have speculated that the new mechanism is a generalisation of polymorphic event mappings. If this is the case, then we may even be able to simplify the OOA-of-OOA as a result of this change. A second simplification is that event generators no longer need to carry two data parts: the id and the supplementary data. Instead, only one part is needed. This simplification would benefit dataflow languages such as SMALL. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- This message reacts to Lahman's conjecture and Wilkie's reply about K-C's use of HOLD; perhaps my grasp of the problem is naive - if so, thanks in advance for clarifying it. As I understand it, HOLD will make sure an event stays on the queue when an Active Instance (AI) object in some state wants to defer processing of this event until a subsequent state where the object can accept it. S&M:OLC only allows Transition, CannotHappen and Ignore entries in the STT (State Transition Table) [Ref. OLC text 3.6 and Fig. 3.6.1, p. 51]. Kennedy-Carter extends this set of STT entries with the HOLD alternative. If an event is HELD on the queue while later events are processed, the event destination object may make a transition under a later event to a state where the HELD event WILL enable a transition. This entry is in the same event column of the STT. Someone else said HOLD was a solution to the 'Waiting for K events' requirement (e.g., completion of K parallel process forks when the order of completion is unpredictable). S&M:OLC mentions this problem in OLC 3.5 [Remembering Events, Fig. 3.5.2, p. 49]. Paths for K! orderings must be provided in the naive solution. This grows factorially with K (it does NOT scale up!); K = 5, for instance, already means 5! = 120 orderings.
OLC's second alternative (Fig. 3.5.3) may be generalizable to entering a state which keeps track of a pre-specified set of completion-events until all forks report in. Kennedy-Carter's HOLD is a third alternative: it can retain completion-events from higher fork numbers in the input queue until lower-numbered forks report completion. Transitions will occur from states with HOLD actions to states with enabled transitions on the HELD event. This results in acceptance of the HELD events in some pre-specified order. Could a more conventional State Model interpreter handle this? One way would be for the state action for an event with a 'HOLD' on it to simply re-queue or regenerate this same event, with the same data values, by putting it back on the queue BEHIND other events already there. This is not equivalent to HOLDing it, because HOLD (I presume) keeps the HELD event in front of the queue not at its tail. Of course, data from the deferred event must be retained when re-queuing it. (In our naive implementation, the event is not deleted until AFTER the action returns, so event data can be copied into object data members or into generated events.) All intervening events must be processed before the tail is re-checked, whereas if the HELD event is kept at the front of the queue it is (I assume) re-inspected after a single event is processed. On the other hand, HOLD implies more dispatching overhead because it is reinspected and verified as still HOLDing before dispatching EACH next event - or does it?). Bob Lechner CS Dept, UMass Lowell lechner@cs.uml.edu Subject: Re: (SMU) Anonymous event notification Simon Wright writes to shlaer-mellor-users: -------------------------------------------------------------------- > Gregory Rochford writes to shlaer-mellor-users: > The OCM for the ODMS shows a flow of control from the Disk Request > down through the Disk Ownership Assigner, the Disk, etc. In this OCM, > control is distributed amongst all the state models. Consider an OCM > where control was centralized in, e.g., the Disk object. The state > models built with this strategy would be very different. The Disk has > to know everybody's business. It has all the responsibilities. The > OCM looks like a spider's web with the Disk at the center. I was recently being given the once-over by a senior Rational consultant about the RUP etc and I suddenly noticed a Controller object (on a sequence diagram). Hackles rose immediately. I also saw them in the Booch etc book (the Process one, forget its name). Seems that the standard way of doing things is to have a Controller object per Use Case, whose job is to coordinate it all. (It's clearly an artefact, cos there's nothing in the customer's world that corresponds to it). I don't quite know what's wrong with this, except something must be! I suspect it's rather like the problem you have before you've understood recursion, or how event-driven GUIs work. -- Simon Wright Email: simon.j.wright@gecm.com Alenia Marconi Systems Voice: +44(0)2392-701778 Integrated Systems Division FAX: +44(0)2392-701800 Subject: RE: (SMU) Anonymous event notification "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > Simon Wright writes to shlaer-mellor-users: > -------------------------------------------------------------------- > I was recently being given the once-over by a senior Rational > consultant about the RUP etc and I suddenly noticed a Controller > object (on a sequence diagram). 
Hackles rose immediately. > ... > I don't quite know what's wrong with this, except something must be! Generally the manifestation of a "Controller" shows an incomplete abstraction of that domain. Often a controller is used to coordinate consumers contending for some limited resource. While the coordination responsibilities are truly needed, they should be allocated to an object that acts as the "partitioning instance" (introduced in OOA96). E.g., an instance of Customer enters an instance of Department of a store looking for an instance of Clerk from that Department. That instance of Department is the partitioning instance, and will act as the "controller". Another common duty for a Controller is to sequence a chain of activities to complete a non-trivial request. In this case, abstracting the Request itself and allocating the coordination role to it is preferred to using a distinct Controller. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > An instance knows everything about its own processing :D > Being in a certain state is everything it needs to know about what > processing it needs to do at that point in time. True, but that is the point. It knows nothing about its own sequence of processing *through* time. This is exactly what bothers me about events knowing where they are going. That implies that they know something about processing outside that point in time and, worse, outside their own instance. > There are object patterns where state models have event protocols to > get something done (allocate a resource, whatever). The state models > participating in that protocol are in close communication while > they are executing that protocol. I'm having a problem seeing why > this is a bad thing. Not all design patterns are good ideas. B-) I am with Whipp here in that it would make me want to look at my abstractions again -- if the purpose of the pattern is to manipulate events rather than to address some problem space need. OTOH, I don't necessarily see anything inconsistent with my point in that, particularly if the protocol is defined in the problem space. The state models reflect the responsibilities of particular abstractions (i.e., those in the design pattern). The events that cause transitions within each model could be simply defining the intrinsic life cycle.
> > That's what I meant by minimizing dependancies between state models in > your > domain, by the layering of reponsibilities (and communication) on the > OCM. I would argue that currently the OCM is fully derived. Therefore the dependencies reflect the selection of domain objects and the allocation of responsibilities to them. That happens before the SM is done. So I don't see the connection with the issue here concerning whether an event should know where it is going. > > In practice what we do is design the state models without event generation and > > then make a second pass to see where the events, that are internal to the domain > > and needed for the defined transitions, should be generated. Again, at that point > > the number of dependencies is fixed, so it is just a question of where they are > > allocated rather than minimizing them. > > > > So where do you do the OCM layering? Or is that implicit (i.e., in the > analyst's head) > in how you do your state models? I am not sure what you mean by 'OCM layering'. When a domain is defined one identifies the objects. The first pass is simplistic in trying to identify problem space abstractions without regard to functionality. The second pass identifies attributes. The third pass allocates the responsibilities for manipulating the attributes (i.e., defining the level of abstraction of the life cycles). Typically each pass results in new/modified objects as the overall view of domain behavior is refined. Finally, there is a pass with informal, high level use cases to ensure that the overall behavior is plausibly resolved. At this point we have a pretty good idea of the information flows between the objects (i.e., in effect, an informal OCM). But this is pretty late in the process -- which is why I contend dependencies are implicit in the abstractions defined. We assume that if we have defined the abstractions correctly everything will Just Hang Together when we do the SMs. [When the SMs are defined there is some context pollution because the analyst has a pretty good idea about the data flowing in an out and this inevitably affects the design of the SM. But we prefer to think of this as requirements on the life cycle. B-) So when it comes time to figure out where to generate the events, things go pretty smoothly and most of the adjustments are just syntactic.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > But you still have to ensure, when A1, A2 and A3 all > broadcast E397 (each with different supplemental data), that > W14, W23, W31 and A7 all receive the correct supplemental > data on their E29. On the simple cases, pulling the referential attribute accessor chains out of the action would do it. But that has its own redundancy problems. And I speculate that it would be insufficient in the general case. > If E29 maps to E397, then both relate to the same situation. But > they are both in the same domain, so the analyst shouldn't define > the same situation in 2 places. That violates 1 fact in one place. I don't see this particular problem (mapping E29 to E397). I am not convinced they are the 'same situation'. 
I see one situation where an event is generated to announce a state machine's completion of some activity. In the other situation an event is received that may affect another state machine's state. Lots of FSMs will be generating events and lots will be receiving events. One requires a separate fact, dependent upon the domain behavior, that sorts it all out by identifying the particular events that are related. Moreover, the events may not be syntactically the same (more below). > Each situation should be associated with exactly 1 event. It is > possible that many objects might wish to generate or recieve that > event, and that we may wish to decouple senders from receivers. ... S-M insists upon an event going to only one place (one object's state model). > > And I'll let that Architect, who > > already has too much time on his hands, figure out how to map the data > > in volts on E397 to millivolts on E29." In effect one is just using a > > new notation at the OCM level as a bridge to route events and map data > > packets between objects. > > ... It would be wrong, however, to attempt to completely decouple > them within a domain. There should be no need to map event-data > within a domain, because there should be no semantic shift > between the 2. The attempt to encapsulate too much within one > object is the trap of forgetting that analysis aims to expose, not > to encapsulate, information within a subject matter. You aren't > doing design! You lost me here. I wasn't attempting to decouple anything from the domain. I was merely pointing out that the events may not be exactly the same at the architectural level and one might need something more than a table lookup of an address to implement the mapping. (A problem one faces anyway, but now it can be associated with a particular mapping construct.) Then the second sentence seems to contradict the first (though I agree with it). Where I lost the thread completely was on the third sentence. What I thought we were talking about was the opposite -- pulling information out of objects and exposing it in a supplementary notation. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- David.Whipp@infineon.com wrote: > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > There are object patterns where state models have event protocols to > > get something done (allocate a resource, whatever). The state models > > participating in that protocol are in close communication while > > they are executing that protocol. I'm having a problem seeing why > > this is a bad thing. > > I can argue that many such protocols are an indication of pollution > (solution-domain thinking). However, given that the protocol does > exist, then we still want to keep things as simple as possible > I meant a protocol such as the events that pass between the Clerk/Customer and the Clerk-Customer Assigner. I don't think there's anything there that's solution based. But anywway... 
"As simple as possible, but no simpler" :) > > That's what I meant by minimizing dependancies between state > > models in your domain, by the layering of reponsibilities (and > > communication) on the OCM. > > No one would argue that you shouldn't minimise the dependencies > within a domain. An layering is a good first cut approach to > this goal. But, once you have minimised them, can you minimise > then even more. > > The answer would seem, by definition, to be "no". But, Lahman and > I are discussing changes to the notation. If we can change the > rules, then perhaps the answer can become "yes". > Ah, engineers, always changing things. :) I think I understand what you want to do. (At least as far as using relationship instances to direct events) My first thought is: What is the cost/benefit of maintaining relationships with all instances that event communication occurs with vs. finding the identifier of the destination each time an event needs to be sent? If an instance is typically in a relationship with the destination instance, then this could be a win. And on the negative side: How many relationships are added to the OIM just for event communication? How much additional benefit is there in decoupling the state models in a domain? Do people really reuse objects at the domain level? > You mention patterns. Let me suggest a ADFD pattern that is > almost universal: event generators have a 'destination id' > that comes from an accessor. This accessor may read > referential attributes, or it may be a chained navigation. > > A further consideration is that many states within a state > model may generate events to the same other-instance. > > If events can be associated with a relationship, then the > accessors can be eliminated from the state models. The cost > of this removal is that you have to associate events with > relationships, and maintain this association. This might > not gain very much. > > But the change introduces new possibilities. The sender and > reciever can be decoupled. I have speculated that the new > mechanism is a generalisation of polymorphic event mappings. > If this is the case, then we may even be able to simplify > the OOA-of-OOA as a result of this change. > > A second simplification is that event generators no longer > need to carry 2 data parts: the id and the supplementary > data. Instead, only one part is needed. This simplification > would benefit dataflow languages such as SMALL. > > Dave. > > -- > Dave Whipp, Senior Verification Engineer > Infineon Technologies Corp., San Jose, CA 95112 > mailto:David.Whipp@infineon.com tel. (408) 501 6695 > Opinions are my own. Factual statements may be in error. -- Project Technology -- Shlaer/Mellor OOA/RD Instruction, Consulting, BridgePoint, Architectures -------------------------------------------------------- Gregory Rochford grochford@projtech.com 214.350.7616 URL: http://www.projtech.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > There are, of course, a few issues: > > How are self-directed events handled? Why any differently (other than performance)? The prioritization is a queue manager implementation issue once it knows the target FSM is the same as the origin and it has to know the origin anyway to do self-directed events. > Do you add relationships to events (do you allow relationship chains)? > or do you add 0 or more events to relationships. 
> In this case, can all the way-points on the chain receive the events? Do they need > to explicitly ignore events that just pass through them, or do > they explicitly accept events that they are interested in? I would think the events go directly source-to-target instance. The need to walk the relationship(s) is only to define the set of target instances. I don't see why the length of the chain should matter. > How does this interact with the current polymorphism mechanism? Can > it be seen as a generalisation of polymorphism? What does this say > about the other things that are special for the is-a relationship? I would think S-M's version of polymorphism is unaffected. The target instance is the target instance is the target instance... The M just increases the list of targets for the event and the resolution to the leaf's FSM would be handled the same way as usual. In fact, I think the answer to all of these issues is that conceptually the mapping takes place before the event is placed on the queue. Identification of the instances is just defined outside the FSM action but it would be associated with the event generator process. The result is a set of events being placed on the queue with addresses defined as usual -- there are just several placed at once with identical data packets and different addresses. Once they are on the queue, the queue manager would handle them the same way it would have if the action had done the navigating and spit them out individually as members of a set going into the event generator. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford... > > > An instance knows everything about its own processing :D > > Being in a certain state is everything it needs to know about what > > processing it needs to do at that point in time. > > True, but that is the point. It knows nothing about its own sequence of processing > *through* time. This is exactly what bothers me about events knowing where they are > going. That implies that they know something about processing outside that point in > time and, worse, outside their own instance. > > > There are object patterns where state models have event protocols to > > get something done (allocate a resource, whatever). The state models > > participating in that protocol are in close communication while > > they are executing that protocol. I'm having a problem seeing why > > this is a bad thing. > > Not all design patterns are good ideas. B-) I am with Whipp here in that it would > make me want to look at my abstractions again -- if the purpose of the pattern is to > manipulate events rather than to address some problem space need. Ah, but it was an analysis pattern, so it must be good :) > OTOH, I don't necessarily see anything inconsistent with my point in that, particularly > if the protocol is defined in the problem space. The state models reflect the > responsibilities of particular abstractions (i.e., those in the design pattern). The > events that cause transitions within each model could be simply defining the intrinsic > life cycle.
The fact that those events come from the other state machine results from > the way that the Analyst solves the domain problem (i.e., by juxtaposing the objects in > the patterns). > > > The OCM for the ODMS shows a flow of control from the Disk Request down > > through > > the Disk Ownership Assigner, the Disk, etc. In this OCM, control is > > distributed > > amongst all the state models. Consider an OCM where control was > > centralized in, > > e.g., the Disk object. The state models built with this strategy would > > be > > very different. The Disk has to know everybody's business. It has all > > the > > responsibilities. The OCM looks like a spider's web with the Disk at > > the center. > > > > This OCM has more dependancies than the one given as a solution in the > > course. > > > > That's what I meant by minimizing dependancies between state models in > > your > > domain, by the layering of reponsibilities (and communication) on the > > OCM. > > I would argue that currently the OCM is fully derived. Therefore the dependencies > reflect the selection of domain objects and the allocation of responsibilities to them. > That happens before the SM is done. So I don't see the connection with the issue here > concerning whether an event should know where it is going. > The OCM *can be* fully derived. > > > In practice what we do is design the state models without event generation and > > > then make a second pass to see where the events, that are internal to the domain > > > and needed for the defined transitions, should be generated. Again, at that point > > > the number of dependencies is fixed, so it is just a question of where they are > > > allocated rather than minimizing them. > > > > > > > So where do you do the OCM layering? Or is that implicit (i.e., in the > > analyst's head) > > in how you do your state models? > > I am not sure what you mean by 'OCM layering'. When a domain is defined one identifies > the objects. The first pass is simplistic in trying to identify problem space > abstractions without regard to functionality. The second pass identifies attributes. > The third pass allocates the responsibilities for manipulating the attributes (i.e., > defining the level of abstraction of the life cycles). Typically each pass results in > new/modified objects as the overall view of domain behavior is refined. Finally, there > is a pass with informal, high level use cases to ensure that the overall behavior is > plausibly resolved. > See OL:MWS p 89ff. In the State modeling course, there was a whole section on doing a preliminary OCM before you started doing state models. Instructors tended to emphasize if you didn't get this right, you state models would be a mess. It sounds like your third pass is where this happens. (And it sounds like you usually get it right :) > At this point we have a pretty good idea of the information flows between the objects > (i.e., in effect, an informal OCM). But this is pretty late in the process -- which is > why I contend dependencies are implicit in the abstractions defined. We assume that if > we have defined the abstractions correctly everything will Just Hang Together when we do > the SMs. [When the SMs are defined there is some context pollution because the analyst > has a pretty good idea about the data flowing in an out and this inevitably affects the > design of the SM. But we prefer to think of this as requirements on the life cycle. 
> B-) So when it comes time to figure out where to generate the events, things go pretty > smoothly and most of the adjustments are just syntactic.] > OK, gotta go to work now :) Subject: Re: (SMU) OCM layering Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford... > > > The OCM for the ODMS shows a flow of control from the Disk Request down > > through > > the Disk Ownership Assigner, the Disk, etc. In this OCM, control is > > distributed > > amongst all the state models. Consider an OCM where control was > > centralized in, > > e.g., the Disk object. The state models built with this strategy would > > be > > very different. The Disk has to know everybody's business. It has all > > the > > responsibilities. The OCM looks like a spider's web with the Disk at > > the center. > > > > This OCM has more dependancies than the one given as a solution in the > > course. > > > > That's what I meant by minimizing dependancies between state models in > > your > > domain, by the layering of reponsibilities (and communication) on the > > OCM. > > I would argue that currently the OCM is fully derived. Therefore the dependencies > reflect the selection of domain objects and the allocation of responsibilities to them. > That happens before the SM is done. So I don't see the connection with the issue here > concerning whether an event should know where it is going. > I was giving an example to your question of what I meant by minimizing dependencies between state models. > > > In practice what we do is design the state models without event generation and > > > then make a second pass to see where the events, that are internal to the domain > > > and needed for the defined transitions, should be generated. Again, at that point > > > the number of dependencies is fixed, so it is just a question of where they are > > > allocated rather than minimizing them. > > > > > > > So where do you do the OCM layering? Or is that implicit (i.e., in the > > analyst's head) > > in how you do your state models? > > I am not sure what you mean by 'OCM layering'. When a domain is defined one identifies > the objects. The first pass is simplistic in trying to identify problem space > abstractions without regard to functionality. The second pass identifies attributes. > The third pass allocates the responsibilities for manipulating the attributes (i.e., > defining the level of abstraction of the life cycles). Typically each pass results in > new/modified objects as the overall view of domain behavior is refined. Finally, there > is a pass with informal, high level use cases to ensure that the overall behavior is > plausibly resolved. > See OL:MWS p 89ff. In the State modeling course, there was a whole section on doing a preliminary OCM before you started doing state models. Instructors tended to emphasize if you didn't get this right, you state models would be a mess. It sounds like your third pass is where this happens. (And it sounds like you usually get it right :) Since you are such fine (and experienced) analysts, you always pick the appropriate resposibilities for the objects that results in state models that have minimal dependancies. Try to recall the first time you analyzed a domain. 
:) I'm trying to show that you are (implicitly) making a choice, and that different choices lead to different (more or less coupled) state models. It's not just luck, man. > At this point we have a pretty good idea of the information flows between the objects > (i.e., in effect, an informal OCM). But this is pretty late in the process -- which is > why I contend dependencies are implicit in the abstractions defined. We assume that if > we have defined the abstractions correctly everything will Just Hang Together when we do > the SMs. [When the SMs are defined there is some context pollution because the analyst > has a pretty good idea about the data flowing in an out and this inevitably affects the > design of the SM. But we prefer to think of this as requirements on the life cycle. > B-) So when it comes time to figure out where to generate the events, things go pretty > smoothly and most of the adjustments are just syntactic.] Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lechner... > Kennedy-Carter's HOLD is a third alternative: it can retain completion-events > from higher fork numbers in the input queue until lower-numbered forks report > completion. Transitions will occur from states with HOLD actions to states with > enabled transitions on the HELD event. This results in acceptance of the HELD > events in some pre-specified order. I don't think this solves the problem when there are multiple events reporting completion. The first one processed will cause a transition out of the HOLD state, allowing the pending held event(s) to be processed, possibly prior to the rest of the completion reporting events being generated. I believe you still need an extra state (in another object) that receives and counts the completion reports in some fashion. That action would have to generate the event that causes the transition out of the HOLD state when the count was complete. [If one has an aversion to counting events, one could do a variation on 3.5.3 by creating multiple instances of the object. As each subprocess completes, it deletes one instance. If there is only one to delete, it also generates the event that transitions out of the HOLD state. To me this is just a clumsier way of counting events.] > Could a more conventional State Model interpreter handle this? > One way would be for the state action for an event with a 'HOLD' on it > to simply re-queue or regenerate this same event, with the same data > values, by putting it back on the queue BEHIND other events already there. > This is not equivalent to HOLDing it, because HOLD (I presume) keeps > the HELD event in front of the queue not at its tail. > Of course, data from the deferred event must be retained when re-queuing it. > (In our naive implementation, the event is not deleted until AFTER the > action returns, so event data can be copied into object data > members or into generated events.) Yes, this eliminates the need for the HOLD in the STT. But I'm not sure it solves the problem. Somebody still has to generate the event that transitions out of the state doing the requeuing so that the requeued events can get executed. Determining when to issue that event is the real problem that suggested using HOLD in the first place. Unfortunately, there is a second difficulty. I think you could get an infinite loop on the queue manager. 
The requeued event would be a self-directed event, so it would always be processed before the event that transitioned out of the state. > All intervening events must be processed before the tail is re-checked, > whereas if the HELD event is kept at the front of the queue it is (I assume) > re-inspected after a single event is processed. On the other hand, HOLD > implies more dispatching overhead because it is reinspected and verified > as still HOLDing before dispatching EACH next event - or does it?). Actually, I think requeuing would entail a lot more overhead than HOLD. As Wilkie's example pseudo code demonstrated, handling the HOLD could be a simple table lookup whereas requeuing is likely to require more processing. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Anonymous event notification "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman (agreeing with Rochford, I think) ---------------------------- >I would argue that currently the OCM is fully derived. Therefore the dependencies >reflect the selection of domain objects and the allocation of responsibilities to them. >That happens before the SM is done. So I don't see the connection with the issue here >concerning whether an event should know where it is going. I, too, used to think the OCM was fully derived, i.e., a condensation of your real work for purposes of identifying scenarios for penny pushing to validate your (nearly completed) state-models. That was the way the material was taught to me and was the way I started my first SMOOA project, a pump. In trying to model this way, however, I got into trouble. The objects I had identified had reasonably clear lifecycles if taken as individual parts of the system (the objects were valves, pressure sensors, air sensors, and a fluid-displacement piston.) But when it came to coordinating them to achieve a particular purpose, the thorny issue of a control strategy arose. In some ways the PT ODMS example (i.e., distributed control) looked like a good pattern . But in other scenarios (use-cases) it broke down. We tried several control strategies, each designed to be minimally disruptive to what we all agreed were fine models. We ended up taking several steps back, then building several layers of "controller" objects to coordinate the lowest-level objects. We hammered this out, iterating between the OCM and SM levels. The new control strategy had a profound impact on the previously pristine, hardware-based lifecycles. The reaction to a particular hardware event was now sensitive not only to the state of the hardware but to the identity of the particular control pattern being employed as well. A part of this was that the destination of response-events sent by the hardware objects was also context-dependent, at times a polymorphic event to a controller and at other times a scenario-based notification to related, peer objects. Time and again we asked ourselves if we were "model hacking", to use Leon Starr's term. To check for this we would periodically step back and look for other abstractions which could reduce the complexity of our state models and make them look more like "analysis" and less like "design". But the answer was always no; there were no more abstractions in the problem space to guide us. 
This exercise taught me a few things: a) One's control strategy (i.e., flow of control at the OCM level) is not likely to fall out of modeling object lifecycles. There eventually needs to be a master plan for object communication. The control patterns are usually invented rather than uncovered. b) IMHO, the prohibition against "design" during the object-modeling is excessive and misleading. Since one can come up with at least two control- strategies for achieving the same behavior in a set of active objects, by definition the part of the models which specifies this is _overspecification_ relative to the problem domain (i.e., design pollution). The overspecification occurs mostly in _generated events_. c) That objects know which instances they are talking to is often essential when one object is controlling another. I know it's not technically "problem analysis", but it is often unavoidable. BTW, I have also heard that various PT folks advocate doing a rough OCM (a "layering", if you like) _before_ identifying life-cycles. Makes sense to me now. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- I would argue that if an "event" comes at a point in my lifecycle at which I want to save it for later, it is not part of the lifecycle, but rather a separate data-stream. Thus, a separate queue (i.e., not the event queue) is a better place for the state machine to get this information. This has the benefit of retaining the important properties that events are FIFO and without priority. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Wright... > I was recently being given the once-over by a senior Rational > consultant about the RUP etc and I suddenly noticed a Controller > object (on a sequence diagram). Hackles rose immediately. > > I also saw them in the Booch etc book (the Process one, forget its > name). Seems that the standard way of doing things is to have a > Controller object per Use Case, whose job is to coordinate it > all. (It's clearly an artefact, cos there's nothing in the customer's > world that corresponds to it). > > I don't quite know what's wrong with this, except something must be! I > suspect it's rather like the problem you have before you've understood > recursion, or how event-driven GUIs work. Oh, goody! I get to wax enthusiastically about one of my Deep Philosophical Hot Buttons! The problem is that the Controller object is not tending to only its own roses. Flame ON: This is philosophical issue for the 'data-driven' approaches like S-M. We define an object as an encapsulation of data and the processes that operate on that data. (This also happens to be the more traditional definition introduced in the '70s and early '80s.) 
The definition used in the responsibility based approaches is that an object encapsulates functionality and the data that supports that functionality. Thus the data driven types think about what objects _are_ while the responsibility driven folks think about what objects _do_. If you focus on what an object _is_ and only express functionality in state machines, the solution tends to be well partitioned and encapsulated. One does not worry about what others are doing. (The basis of my dislike of an action knowing where an event is going.) I believe this leads to robust and maintainable systems. Though Jacobson specifically warns against doing functional decomposition when using use cases to design, this is easier said than done when one only thinks about what objects _do_. This is because one thing they can do is tell somebody else what to do. This is not helped at all by the fact that all of the OOPLs have copped out and equated message with method. It is subtle but pervasive that when you call a method the paradigm is that you are telling somebody what to do. Messages (events) don't have that problem and, in fact, one is encouraged to make them announcements rather than imperatives. In S-M the communication paradigm is "I'm Done" rather than "Do This". So it is no surprise that in the responsibility based methods growing out of use case analysis one starts to see a lot of XXX Controller or YYY Manager spiders that are running the show. Flame OFF: To answer your question... In the data driven approaches one is very focused on true encapsulation. This is not so much about decoupling as about the mindset that views each object independently and places great emphasis on creating the correct abstractions. [Anybody can come up with a collection of functions and wrap a class around it. Witness that our most popular and poorly designed OOPL is C++, which allows legions of C programmers to continue business as usual while converting to OT. But I digress...] It takes work to develop correct abstractions. The payoff for that work is better encapsulation of both data AND functionality. And that leads to more robust and maintainable systems. [Marginally related Apocryphal Aside: Awhile ago I talked to a guy in a C shop where they are rewriting 13 MLOC. He announced that the rewrite would be done from scratch with OT. I said, "Great, what methodology are you going to use?" He said, "UML." I said, "Yes, but what design methodology are you going to use?" He looked puzzled and replied, "You know, classes, objects...all that stuff." I'll take Major Failure and give 5:1 odds. Any takers?] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > > How are self-directed events handled? > > Why any differently (other than performance)? The > prioritization is a queue manager implementation > issue once it knows the target FSM is the same as the > origin and it has to know the origin anyway to do > self-directed events. The point is that a self-directed event can be internal to a state model. If you do the event routing in the OIM (or OCM) then the state model no longer knows that an event is self-directed.
Two seemingly identical event generator processes have different behaviour. > > Do you add relationships to events (do you allow > > relationship chains)? > > or do you add 0 or more events to relationships. In this case, can > > all the way-points on the chain receive the events? Do they need > > to explicitly ignore events that just pass through them, or do > > they explicitly accept events that they are interested in? > > I would think the events go directly source-to-target instance. The > need to walk the relationship(s) is only to define the set of > target instances. I don't see why the length of the chain should matter. It's a notational thing, really. For symmetry, it'd be nice to be able to see what events flow along which relationships. This implies that, in the OOA-of-OOA, each relationship is associated with 0:M events (in fact, each direction may have different events -- so let's associate the event with an end-point of the relationship). As soon as you do this, you realise that events pass through (or possibly bypass) objects as they follow the chain. This leads to the question of having intermediate objects react to the events. If there's no good reason to prevent this, then the power of the notation is increased by allowing it. > I would think S-M's version of polymorphism is unaffected. > The target instance is the target instance is the target > instance... The M just increases the list of targets for > the event and the resolution to the leaf's FSM would be > handled the same way as usual. If I look at the OOA-of-OOA, then I see that there would be two event mappings: polymorphism and routing. This raises alarms: 2 very similar concepts in 2 places in the model cry out for a greater abstraction. Polymorphism is the association of events with relationship end-points on the is-a; routing is the association of events with relationship end-points on other relationships. This does not seem, IMHO, to be 2 concepts. > In fact, I think the answer to all of these issues is that > conceptually the mapping takes place before the > event is placed on the queue. I don't think it really matters when you do the mapping. Just choose it, and be consistent. My 2 points are: (a) self-directed events are different ... the state model explicitly says that they are not routed and (b) routing is a generalisation of polymorphism. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Thu, 11 Nov 1999 01:55:50 Bob Lechner wrote: >Bob Lechner writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Kennedy-Carter's HOLD is a third alternative: it can retain completion-events >from higher fork numbers in the input queue until lower-numbered forks report >completion. Transitions will occur from states with HOLD actions to states with >enabled transitions on the HELD event. This results in acceptance of the HELD >events in some pre-specified order. > >Could a more conventional State Model interpreter handle this?
>One way would be for the state action for an event with a 'HOLD' on it >to simply re-queue or regenerate this same event, with the same data >values, by putting it back on the queue BEHIND other events already there. >This is not equivalent to HOLDing it, because HOLD (I presume) keeps >the HELD event in front of the queue not at its tail. >Of course, data from the deferred event must be retained when re-queuing it. >(In our naive implementation, the event is not deleted until AFTER the >action returns, so event data can be copied into object data >members or into generated events.) > >All intervening events must be processed before the tail is re-checked, >whereas if the HELD event is kept at the front of the queue it is (I assume) >re-inspected after a single event is processed. On the other hand, HOLD >implies more dispatching overhead because it is reinspected and verified >as still HOLDing before dispatching EACH next event - or does it?). I have been doing S-M 'like' modeling for many years and it is in this thread that I first came across the HOLD action. Thinking about what I would do, as an analyst, if the HOLD situation occurs and I had never heard of the HOLD action, here's my take: To me, an event occurs at any instantaneous moment in time and has no substance (it may, however, have associated data). So an event cannot be HELD. If an event occurs and nobody wants it the event is lost. (As an aside, if an event occurs and nobody wants it, I'm going back to my analysis model to find out what I did wrong.) The idea of queueing events is a little abhorrent to me. I certainly do not model event queues in my analysis models. (I like to think of event queues as an implementation feature which is required because we are introducing hardware into our model. Anyway, that's another thread for discussion.) If an event occurs and some object wants to know, at a later date, that it occurred, then there needs to be an object that is looking for that event. That object can record the occurrence of the event as a flag or status type variable. Alternatively the object looking for the event will itself change state as a result of the event (every event causes a state change, even if it's back to the same state) and can then monitor the receiving objects to determine when to send them an appropriate event. If we forget about any sort of event queue, then the HOLD action appears to be equivalent to the receiving object (who wants to know about the event in the future) setting a status flag, which can be checked when the object has finished what it's doing to determine if it should generate an event to itself. The fact that we're creating an artificial status flag makes me think that we're polluting our domain unnecessarily. Better to have an object whose job is to recognize the receipt of the event and act appropriately at the appropriate time. Which leads me to: The thread 'Anonymous event notification', closely related, I think. The idea of objects making announcements and having interested party objects goes against all my analysis experience. In analysis, one models the minimum necessary to make the thing work. No more, no less (ideally). So bearing this in mind an object will only generate an event if the object knows that someone wants that event. Hence the object generating the event will always know who the event is going to. The object receiving the event does not know where the event came from.
Broadcasting an event is not allowed, because the broadcaster does not know where the event is going (I hope I haven't just made a circular argument here). Within a single domain this will always be true in one of my models. Now where my ignorance comes into play is when sending events across domains. I'm going to assume that the sending object knows something about the bridge that is receiving the event. The bridge knows how to respond to the event by activating the appropriate object in the other domain. Hence my rules are not broken. Perhaps someone would like to enlighten me as to how bridges can handle events between domains. Well that's my contribution. Cheers, Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > Gregory Rochford wrote: > Ah, engineers, always changing things. :) > > I think I understand what you want to do. (At least > as far as using relationship instances to direct events) > My first thought is: > > What is the cost/benefit of maintaining relationships > with all instances that event communication occurs with vs. > finding the identifier of the destination each time > an event needs to be sent? > > If an instance is typically in a relationship with the > destination instance, then this could be a win. > > And on the negative side: > > How many relationships are added to the OIM just for > event communication? What we need is an example! Is the ODMS example on the web anywhere? My training notes are somewhere in a storage box in England! > How much additional benefit is there in decoupling > the state models in a domain? Do people really > reuse objects at the domain level? It's not so much about reuse: it's about maintenance. What is the correct localisation for directing events? If the same direction appears in many state actions then it would be better to put it elsewhere. If this is of no benefit, then we can fall back to methodology. PT suggests that a good way to think about communication within a domain is to sketch out a preliminary OCM. Perhaps it is better, from the methodology perspective, to formalise the routing of events at that time. > I meant a protocol such as the events that pass between the > Clerk/Customer and the Clerk-Customer Assigner. I don't think > there's anything there that's solution based. But anyway... It's interesting you should mention Assigners. Assigners were moved from assoc-objects to relationships in OOA96. Now we're suggesting moving events as well. Let's look at the customer-clerk example: (1) customer: "I need help" (2) clerk: "I'm ready to serve someone" (3) assigner (to clerk): "that customer needs help" (4) clerk (to customer): "I can help you" ... From the perspective of relationship-based routing: The initial events are sent to the relationship itself. The assigner creates the relationship-instance (the routing path) and then tells the clerk to use it. If we try to direct events using the relationships themselves, then we immediately hit a problem: there can be no instance of an assigner->clerk relationship. So how does this event get routed? If we ignore this problem, then event (4) works perfectly: the relationship exists, so the event can simply be sent! So what do we do about (3)?
Well, the new mechanism explicitly associates event routing with relationships, and assigners are also relationship based. There seems to be enough commonality here to tentatively suggest that assigners should be allowed to specify event routing in their generators. This is the inverse of the current situation, where assigner events have no identifier. If we start naming events for their source, not destination, then it all falls into place. If I were brainstorming, I might also point out that event (3) is simply a delayed version of (1): it's delayed by the relationship, which is where we're doing routing. I'm not sure where that observation might lead. It feels that there's something important hiding there. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > From: Leslie A. Munday [mailto:lmunday@england.com] > Which leads me to: > The thread 'Anonymous event notification', closely related, I think. > > The idea of objects making announcements and having > interested party objects goes against all my analysis experience. > > In analysis, one models the minimum necessary to make the > thing work. No more, no less (ideally). Yes, but the debate here is about what that minimum is. > So bearing this in mind an object will only generate an event > if the object knows that someone wants that event. Hence the > object generating the event will always know who the event is > going to. But how does the object know who wants the event? There is always a relationship between sender and receiver. It might not be on the OIM, but a relationship verb phrase could be "is sending event to" @-)! Usually, there's a much better relationship available! If an object always finds out where to send an event by accessing a relationship, then that "always" can be moved to a more global scope. The model isn't changed, we're simply writing down the analyst's knowledge in a different way. > The object receiving the event does not know where the event > came from. I think we can agree on that! > Broadcasting an event is not allowed, because the broadcaster > does not know where the event is going (I hope I haven't just > made a circular argument here). The event is not really broadcast. The data in the model determines exactly where the event goes, and that data is identical with the data that would be used by the object to determine the destination of its event. The model knows where the event goes, so the state-action doesn't need to duplicate that information. There is no "observer" pattern, where objects dynamically register an interest in random events. The OIM defines the relationships: instances of relationships route instances of events. > Now where my ignorance comes into play is when sending events > across domains. > > I'm going to assume that the sending object knows something > about the bridge that is receiving the event. The bridge > knows how to respond to the event by activating the > appropriate object in the other domain. > > Hence my rules are not broken. Perhaps someone would like to > enlighten me as to how bridges can handle events between domains. One view is that events aren't sent: you either invoke wormholes or things are mapped with implicit bridging.
Either way, identifier mappings can be maintained by "half-tables" in the bridge (I don't really like that term, but its what people seem to use). The half-table usually maps identifier+metaObject in one domain to identifiers in another. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- You're cutting into my sleep time Dave :) David.Whipp@infineon.com wrote: > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > Gregory Rochford wrote: > > > Ah, engineers, always changing things. :) > > > > I think I understand what you want to do. (At least > > as far as using relationship instances to direct events) > > My first thought is: > > > > What is the cost/benefit of maintaining relationships > > with all instances that event communication occurs with vs. > > finding the identifier of the destination each time > > an event needs to be sent? > > > > If an instance is typically in a relationship with the > > destination instance, then this could be a win. > > > > And on the negative side: > > > > How many relationships are added to the OIM just for > > event communication? > > What we need is an example! Is the ODMS example on the web > anywhere? My training notes are somewhere in a storage box > in England! > They weren't important enough to hand carry on the plane? For shame! :) > > How much additional benefit is there in decoupling > > the state models in a domain? Do people really > > reuse objects at the domain level? > > It not so much about reuse: its about maintenance. What > is the correct localisation for directing events? If the > same direction appears in many state actions then it > would be better to put it elsewhere. > But does adding this additional whatever it is make the method more difficult to use? It's hard enough already for most people :) How much of this is a tooling (or lack of tooling) problem? > If this is of no benefit, then we can fall back to > methodology. PT suggests that a god way to think about > communication within a domain is to sketch out a > preliminary OCM. Perhaps it is better, from a the > methodology perspective, to formalise the routing > of events at that time. > My analyst gut reaction is I want to send an event to any instance, without having to form a relationship with it first! Isn't that a waste of time? But then I realize it's just one more rule on top of the 150 other rules in Shlaer-Mellor. And those other rules are there for the analyst's benefit, so why not one more? :) The UML'ers don't seem to mind having to have a link before sending a message. (So maybe this is a reason not to do it ) > > I meant a protocol such as the events that pass between the > > Clerk/Customer and the Clerk-Customer Assigner. I don't think > > there's anything there that's solution based. But anyway... > > Its interesting you should mention Assigners. Assigners were > moved from assoc-objects to relationships in OOA96. Now we're > suggesting moving events as well. Lets look at the customer- I think this is a non sequitor. The placement of the Assigner state machine was changed from the associative object to the relationship itself. 
That's because the assigner manages competition along the relationship, and people didn't want to add an associative object whose only reason for existence was a placeholder for an assigner. So assigners didn't "move" per se. > clerk example: > > (1) customer: "I need help" > (2) clerk: "I'm ready to serve someone" > (3) assigner (to clerk): "that customer needs help" > (4) clerk (to customer): "I can help you" > ... > > >From the perspective of relationship-based routing: > > The initial event are sent to the relationship itself. The > assigner creates the relationship-instance (the routing > path) and then tells the clerk to use it. > > If we try to direct events using the relationships > themselves, then we immediately hit a problem: there > can be no instance of an assigner->clerk relationship. > So how does this event get routed? If we ignore this > probelm, then event(4) works perfectly: the relationship > exists, so the event can simply be sent! > > So what do we do about (3)? Well, the new mechanism > explicitly associates event routing with relationships, > and assigners are also relationship based. There seems > to be enough commonality here to tentitively suggest > that assigners should be allowed to specify event > routing in their generators. This is the inverse of > the current situation, where assigner events have > no identifier. > > If we start naming events for their source, not > destination, then it all falls into place. > Destination based naming is a convention. It was (and is) the one that seemed to work the best. I'm missing how you notate transitions on the receiving state model when it can receive the "same" event from three different objects? Or are there now three transitions? > If I were brainstorming, I might also point out that > event (3) is simply a delayed version of (1): its > delayed by the relationship, which is where we're doing > routing. I'm not sure where that observation might > lead. It feels that there's something important hiding > there. > The assigner mananging the competition in the relationship? Subject: RE: (SMU) Anonymous event notification lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- >> From: Leslie A. Munday [mailto:lmunday@england.com] >> So bearing this in mind an object will only generate an event >> if the object knows that someone wants that event. Hence the >> object generating the event will always know who the event is >> going to. > >But how does the object know who wants the event? > Shouldn't the analyst know? ;-) This brings up a question. If there was a requirement for an event to cause an action that should generate an event to different object's instances, our "broadcast" event, can we write the action to generate the required amount of the "event", directed to each object, and assume that the events are simultaneous? I think that we should be able to do this, as long as we pass our assumption to the architect, so a broadcast message is generated. I'm a lot more comfortable with this concept, of knowing where the event is going when generated, in the analysis, and leaving the implementation up to the architect. 
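A minimal sketch of that last point, assuming a simple single-queue architecture; none of the names (Event, EventQueue, broadcast) come from the thread, they are invented purely for illustration of how an analysis-level "broadcast" might be expanded into individually directed events:

from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    label: str        # e.g. "O3: order received"
    target_id: str    # identifier of the destination instance
    data: dict

class EventQueue:
    def __init__(self):
        self._q = deque()

    def generate(self, event):
        # Normal, one-at-a-time event generation.
        self._q.append(event)

    def broadcast(self, label, target_ids, data):
        # Place one directed event per destination instance consecutively,
        # so nothing else can be interleaved among them -- the "simultaneous"
        # assumption the analyst passes to the architect.
        for tid in target_ids:
            self._q.append(Event(label, tid, dict(data)))

    def dispatch_next(self, instances):
        # Pop the oldest event and deliver it to its target's state machine.
        if self._q:
            ev = self._q.popleft()
            instances[ev.target_id].consume(ev)

Ordering within the batch is preserved here, but nothing says when each target actually processes its copy; that is the point taken up in the replies that follow.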
Subject: (SMU) Hold event after transition "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- >>>> 11/11 3:26 PM >>> > >As soon as you do this, you realize that events pass through (or >possibly bypass) objects as they follow the chain. This leads >to the question of having intermediate objects react to the >events. If there's no good reason to prevent this, then the >power of the notation is increased by allowing it. If I were to have dozens of events that might happen while an object is not in a state where it can handle them, and, to handle them, it must migrate to another subtype, would it be possible to have an event 'HELD' while at the same time allowing that event to cause a transition and associated action? Example: Events E1 through E99 are directed at Supertype A, Subtype B. When any one of these events occurs, there is some cleanup to do and the object migrates to Subtype C. Subtype C then needs to process the event. My understanding of the pure SM method would require states S1-S99, each handling a single event, doing the same cleanup and migration followed by generation of a specific event E1' through E99' back to the supertype. In the case I am thinking of, due to memory limitations that prevented the size expansion of the transition table, the team added a requeue process that simply put whatever event caused the action to run back on the queue. This would have been unnecessary if the event could have been marked as hold after doing the action. The general case would be to change the state transition table to be a combination of logicals (monospaced font):

                Consume   Transition   Exception
TRANSITION      true      true         false
IGNORED         true      false        false
CAN'T HAPPEN    true      false        true
HOLD            false     false        false
HOLD w/ACTION   false     true         false

Thoughts? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: RE: (SMU) Anonymous event notification "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Fri, 12 Nov 1999 11:02:21 Lee Riemenschneider wrote: >lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >>David.Whipp@infineon.com writes to shlaer-mellor-users: >>-------------------------------------------------------------------- >>> From: Leslie A. Munday [mailto:lmunday@england.com] >>> So bearing this in mind an object will only generate an event >>> if the object knows that someone wants that event. Hence the >>> object generating the event will always know who the event is >>> going to. >> >>But how does the object know who wants the event? >> >Shouldn't the analyst know? ;-) > Not sure how serious the question and answer are, but yes, the analyst has to know. The event is only modeled if there is an object that is asking for it. The hard part is working out which object's going to generate it. >This brings up a question. If there was a requirement for an event to cause >an action that should generate an event to different object's instances, our >"broadcast" event, can we write the action to generate the required amount of >the "event", directed to each object, and assume that the events are >simultaneous? I think that we should be able to do this, as long as we pass >our assumption to the architect, so a broadcast message is generated.
> This is how I do it: I have an event which is required by lots of object instances. An object responds to this event and transitions to a state where it can take the appropriate action. The action, in this case, is to send the corresponding event to all the instances that need to know the event occurred. When all of the required 'broadcast' events have been generated the object transitions to the next state, waiting for the next event to occur. I actually do this a lot in my current job. Before I demonstrate, let me explain that I am using Rational Rose :-( and Mealy state machines, because I find these easier to draw. (Putting actions into states on RR is a real pain.) Our products deliver orders to a customer in batch files. The orders go through various states, 'New', 'Editing', 'Sending', 'Received', 'Error' for example. The 'New' and 'Editing' states are entered via an external event through a UI, for example. The 'Sending', 'Received' and 'Error' states are entered depending upon the processing of the batch file containing the order. The 'Batch File' object generates the appropriate events to the appropriate 'Orders' depending upon the state of the batch file. So the 'Batch File' goes from 'Sending' to 'Received' due to an external stimulus. I introduce an interim state called 'Receiving', where the 'Batch File' is sending out all the appropriate events to all the 'Orders' that need to know this. [I represent this as a transition, from the 'Receiving' state back to the 'Receiving' state, which generates the event (Mealy). This transition is executed for every instance requiring the event.] When all the appropriate events are sent (notice no assumption is made about ordering of events in this notation) the 'Batch File' transitions to 'Received', and all of the orders are in 'Received'. >I'm a lot more comfortable with this concept, of knowing where the event is >going when generated, in the analysis, and leaving the implementation up to >the architect. > > I think the looped transition back to the same state will indicate to the architect that some clever design work could be done to make this action efficient. Anyone want to speculate how you could represent this using Moore machines? Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Gregory Rochford wrote: > But does adding this additional whatever it is make > the method more difficult to use? It's hard enough > already for most people :) Or does it make it easier? When you construct an OCM, you know where events are going to/from. I'm sure everyone's had the experience of discovering a missing interaction object as a result of cleaning up the OCM, so why not use it as a way to discover additional relationships (and even missing assigners!). Once all the relationships are in place, doing the state models should be even easier than at present! > My analyst gut reaction is I want to send an event > to any instance, without having to form a relationship > with it first! Isn't that a waste of time? Thus spake the finely honed SM machine. In the vast majority of cases, the relationship is already there. If you follow a [different] methodology in which the relationships/event associations are in place before you start state modeling, then there is no waste of time.
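To make the idea being debated concrete, a minimal sketch of directing an event along an existing relationship instance, so the state action never computes the destination itself; all class and method names are invented for illustration and come from no S-M tool:

class Relationship:
    """One instance of a formalized relationship, e.g. R3: Clerk is serving Customer."""

    def __init__(self, name, a_end, b_end):
        self.name = name
        self._ends = {"A": a_end, "B": b_end}   # identifiers of the participating instances

    def route(self, event_label, toward, queue):
        # The navigation/destination decision lives here, once, instead of
        # being repeated inside every state action that generates the event.
        queue.append((event_label, self._ends[toward]))

# A Clerk action then says only "send 'I can help you' along R3 toward B";
# it never looks up the customer's identifier itself.
queue = []
r3 = Relationship("R3", a_end="clerk-7", b_end="customer-42")
r3.route("C2: I can help you", toward="B", queue=queue)
assert queue == [("C2: I can help you", "customer-42")]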
> But then I realize it's just one more > rule on top of the 150 other rules in Shlaer-Mellor. There is a difference between imposed rules, and facts in the meta-model (OOA-of-OOA). If the meta model says that events are directed along relationships, then the need to construct the relationship becomes a consequence, not an imposition. I want the OOA-of-OOA to be a thing of beauty. Imposed rules tend to show up as warts. (For example, allowing someone to assign multiple values to an attribute would add complexity to the OOA-of-OOA. So the rule of one acts to simplify that model. Conversely, the concept of link/unlink on relationships breaks that rule and adds complexity to the meta model. So I regard that as a bad concept.) > That's because the assigner manages competition > along the relationship, and people didn't want to add an > associative object whose only reason for existence was a > placeholder for an assigner. So assigners didn't "move" > per se. Sometimes, a change that makes no difference can have effects beyond that change. The whole concept of refactoring is based on that premise: you refactor without changing anything to allow you to change other things more easily. > Destination based naming is a convention. It was (and is) > the one that seemed to work the best. I'm missing how > you notate transitions on the receiving state model > when it can receive the "same" event from three > different objects? Or are there now three transitions? There are a number of possible answers: 1. Its like the difference between Mealy and Moore: the two are equivalent. Mealy has more complex transitions; Moore has more states 2. Its like polymorphic events: objects define a mapping between the event they receive and the one they forward 3. Its like process naming: you associate an event with a specific object (where it makes sense) but it can be used anywhere (sent or received). I'm sure I could think of other alternatives if pressed. > > If I were brainstorming, I might also point out that > > event (3) is simply a delayed version of (1): its > > delayed by the relationship, which is where we're > > doing routing. I'm not sure where that observation > > might lead. It feels that there's something important > > hiding there. > > The assigner mananging the competition in the relationship? I'll assume a smiley there. I was actually looking a bit deeper: its seems like routing and assigning (delaying?) may be subtype behaviours of a more abstract concept. Perhaps there's more to relationships than just a referential attribute. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... A slightly different spin than Whipp's... > What is the cost/benefit of maintaining relationships > with all instances that event communication occurs with vs. > finding the identifier of the destination each time > an event needs to be sent? (1) It allows a very efficient notation for the navigation; it is reduced to identifying the specific chain of relationship identifiers. (2) It defines the conditionality of the selection in the static model. (3) It simplifies the actions. In an ADFD we typically have 60-80% of the processes associated with navigations. 
This is distracting from the action's task for both event generation and data store access. The action should describe how data is manipulated, not where it comes from. (4) It decouples the domain flow of control (events destination) from the object functionality (manipulation of data). > If an instance is typically in a relationship with the > destination instance, then this could be a win. I would argue that those relationships probably belong in the IM anyway. As soon as one selects a subset of instances it seems to me that one is indicating that those particular instances have a special significance for the object doing the selecting. That significance should probably be described in a relationship. > How many relationships are added to the OIM just for > event communication? Anticipating the argument that it might add clutter to the IM, I would suggest that excess relationships between two objects might indicate an incorrect allocation of responsibilities among the objects or that the objects have too many responsibilities and should be split up. For example, using this approach a typical spider object would probably have a very large number of such relationships. > How much additional benefit is there in decoupling > the state models in a domain? Do people really > reuse objects at the domain level? I also don't think it is an issue of reuse. When the responsibilities are allocated to objects, those responsibilities are determined by the specific application. Thus the dynamic abstraction of the life cycle itself tends to be domain specific. The benefit of decoupling would come in performing maintenance. By separating the concerns of relationship navigation and the concerns of domain flow of control, both of which are context for the FSM, from data manipulation and the object life cycle it seems intuitive that the system should be more maintainable. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) OCM layering lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > See OL:MWS p 89ff. OK, it has been a decade since I read it so the phrase didn't ring a bell. > In the State modeling course, there was a whole section on doing a > preliminary OCM > before you started doing state models. Instructors tended to emphasize > if you didn't > get this right, you state models would be a mess. It sounds like your > third pass > is where this happens. (And it sounds like you usually get it right :) That is correct. But we don't think about particular events or the OCM per se in that pass. We tend to think about it at the level of, "The Test understands a digital pattern but the Channel doesn't, so Test will have to notify each relevant Channel about its pin state in the pattern..." What the designer of the Channel FSM takes away from this is that the Channel's level of abstraction is not concerned with digital patterns and somebody will be supplying pin state data. > Since you are such fine (and experienced) analysts, you always pick the > appropriate > resposibilities for the objects that results in state models that have > minimal > dependancies. Try to recall the first time you analyzed a domain. 
:) > > I'm trying to show that you are (implicitly) making a choice, and that > different > choices lead to different (more or less coupled) state models. It's not > just > luck, man. I don't think luck has anything to do with it. In my view the key to designing instance FSMs is to remain focused on the inherent life cycle abstraction of the object. We go through a suite of iterations that refine that view until we are ready to write FSMs. I see a danger in driving that FSM design from the OCM viewpoint because that is a very contextual view. That is, that view contains much more information than simple requirements on a particular state machine. The pitfall is that by designing the OCM first one is explicitly defining the domain flow of control and that starts to define how the state machines are designed. The domain flow of control should be driven by the inherent properties of the state machines. Now remaining focused solely on the FSM abstractions has its own set of problems. One has to have some clue about inter-object communications or a lot of adjustments will have to be made to communicate. But we want to think about that as abstractly as possible, so we choose to think about it in terms of object requirements (as in my example above). We also buy into the notion of iterating over object abstractions and domain flow of control. If there is a detailed mismatch when it comes time to generate events, we want to correct that by refining about the object abstractions rather than simply modifying state machines to fit a preconceived notion of flow of control at the OCM level. The end result is superficially the same (i.e., a state machine gets changed) but the way it is done is quite different. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > This exercise taught me a few things: > > a) One's control strategy (i.e., flow of control at the OCM level) is not > likely to fall out > of modeling object lifecycles. There eventually needs to be a master plan > for > object communication. The control patterns are usually invented rather > than uncovered. I agree. We effectively iterate over the allocation of responsibilities and even use cases. This provides the 'master plan' that drives the definitions of the FSMs. We just avoid doing an OCM first because it is not abstract enough and it is easy to superimpose a preconceived flow on control on the FSMs that is not driven by the abstractions themselves. We also iterate between the FSMs and the domain flow of control, but that iteration is driven from the abstractions rather than the OCM. > b) IMHO, the prohibition against "design" during the object-modeling is > excessive and misleading. Since one can come up with at least two control- > strategies for achieving the same behavior in a set of active objects, by > definition > the part of the models which specifies this is _overspecification_ relative > to the problem > domain (i.e., design pollution). The overspecification occurs mostly in > _generated events_. It seems to me that you are arguing my view here. B-) There are, indeed, many ways to solve the problem at the level of domain flow of control (i.e., the OCM). 
However, selecting the 'best' one, if some are somehow better than others, should be driven by the object abstractions because they reflect the problem space. OTOH, if there is no objective means to determine whether one strategy is better than another at model level, I would not call selecting one 'overspecification'. It is simply selecting an equivalent alternative. > c) That objects know which instances they are talking to is often essential > when one > object is controlling another. I know it's not technically "problem > analysis", but it is > often unavoidable. I haven't seen a definitive example of this. However, I suspect it is one of those things that is difficult to demonstrate via EMail. Any alternative is quickly refuted on the grounds of other domain knowledge. That is, one has to get into the whole domain (or even application) before a constructive alternative (if any) could be presented. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > The point is that a self-directed event can be internal > to a state model. If you do the event routing in the > OIM (OR OCM) then the state model no longer knows that > an event is self-directed. Two seemingly identical > event generator processes have different behaviour. I don't see this distinction. The event generator simply generates an event. From the viewpoint of the state action there is no difference in what has happened. And the state of a model after accepting an event does not know where the event came from. I don't see that the state model knows anything about self-directed events. It is only to the analyst defining the domain's flow of control that there is a difference and that difference is an architectural enforcement of S-M's rules that the analyst must accommodate in defining the flow of control. > > I would think the events go directly source-to-target instance. > > Its a notational thing really. For symmetry, I't'd be nice to > be able to see what events flow along which relationships. This > implies that, in the OOA-of-OOA, each relationship is associated > with 0:M events (in fact, each direction may have different > events -- so lets associate the event with an end-point of > the relationship). But that information is still available when the navigation is supplied. One could have a derived diagram like the OCM present it, if necessary. > As soon as you do this, you realise that event pass though (or > possibly bypass) objects as they follow the chain. This leads > to the question of having intermediate objects react to the > events. If there's no good reason to prevent this, then the > power of the notation is increased by allowing it. I can't quickly think of a reason why not, other than complicating the Analyst's life considerably, but it kind of screams Pandora's Box to me. B-) > > I would think S-M's version of polymorphism is unaffected. > > If I look at the OOA-of-OOA, then I see that there would > be two event mappings: polymorthism and routing. This > raises alarms: 2 very similar concepts in 2 places in the > model cries out for a greater abstraction. 
Polymorthism > is the association of events with relationship end-points > on the is-a; routing the the association of events with > relationship end-points on other relationships. This does > not seem, IMHO, to be 2 concepts. I still don't see how polymorphism is affected at all by the routing notation. In S-M all polymorphism does is provide a mapping of a supertype event to a specific subtype event. That happens after the event has been routed (in the OOA) to the supertype. The events are still going to exactly the same place they would have gone if they were generated individually in the state action in the present system. > My 2 points are: (a) self-directed events are different ... > the state model explicity says that they are not routed > and (b) routing is a generalisation of polymorthism. I can agree with both points. I just don't see how they change because one has modified where one specifies the navigation. It seems to me that all that has changed is how the translation gets the information to do exactly what it does now. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > I would argue that if an "event" comes at a point in > my lifecycle at which I want to save it for later, it is > not part of the lifecycle, but rather a separate data-stream. An interesting idea, but... When does an event become a separate data stream? When the sender generates it, the sender thinks it is an event. In the HOLD proposal it would migrate if the event queue tried to process it when the state machine was in a particular state. Are you suggesting that when the Analyst assigns a receiver for the event that it also be designated as a separate data stream? That is, it would always be a separate data stream and the state model would somehow know when to process it. If so, then how would the criteria for when the data stream is processed be defined? > Thus, a separate queue (i.e., not the event queue) > is a better place for the state machine to get this > information. This has the benefit of retaining the > important properties that events are FIFO and without > priority. Isn't this just an architectural issue? That is, Wilkie's pseudocode for the event queue seemed to preserve this as well. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > This brings up a question. If there was a requirement for an event to cause > an action that should generate an event to different object's instances, our > "broadcast" event, can we write the action to generate the required amount of > the "event", directed to each object, and assume that the events are > simultaneous? I think that we should be able to do this, as long as we pass > our assumption to the architect, so a broadcast message is generated. Presently S-M offers no guarantee on the sequencing of events except between the same FSMs. 
You are correct that one thing the 'broadcast' form of event generation brings to the table is the possibility of ensuring that all the events get placed upon the relevant queues immediately (i.e., before any other events). If one has a single event queue per domain, this would mean that all the broadcast events would be placed consecutively on the queue before any others. If one has a queue per instance, then all relevant instances' queues would have to be blocked until all the broadcast events were placed. However, I am not sure of the benefit of this. If one has separate instance queues, one still cannot predict when all the events will be processed because there could already be an arbitrarily large number events on any one of the queues. If one cannot predict when all are done in some way at the OOA level, then one will still need wait states, response event counting, etc. to figure out when it is safe to continue processing. [In the single domain queue case if one places a single 'response' event on the queue after broadcasting the events, one knows it will be processed after all broadcast events.] One does get the benefit of ensuring that no domain external events are placed on the queues before all the broadcast events are placed. But since one couldn't prevent such external events from being placed on the queue just before the broadcast, I have a hard time coming up with a situation where this would be useful. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- REsponding to Simonson... > Events E1 through E99 are directed at Supertype A, Subtype B. > When any one of these events occurs, there is some cleanup to do and the object migrates to Subtype C. > Subtype C then needs to process the event. > > My understanding of the pure SM method would require state S1-S99, each handling a single event, doing the same cleanup and migration followed by generation of a specific event E1' through E99' back to the supertype. Yes, I think that is what S-M dictates since an event can go only to a single state model. > In the case I am thinking of, due to memory limitations that prevented the size expansion of the transition table, the team added a requeue process that simply put whatever event caused the action to run back on the queue. This would have been unnecessary if the event could have been marked as hold after doing the action. Doesn't this problem go away if the subtype migration is handled via a synchronous create(C)/delete(B) in the B action that consumes En? This comes down to what is to be accomplished by the En' event. If that event simply updates C's attributes or moves it to a new state, B could do that initialization as part of the synchronous migration. If the transition action invoked by En' generates an event, then that could also be done in B since B knows everything C until C processes some other event or somebody else writes its attributes. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Anonymous event notification "Lynch, Chris D. 
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman: >> c) That objects know which instances they are talking to is often essential >> when one >> object is controlling another. I know it's not technically "problem >> analysis", but it is >> often unavoidable. > >I haven't seen a definitive example of this. However, I suspect it is one of those >things that is difficult to demonstrate via EMail. Any alternative is quickly >refuted on the grounds of other domain knowledge. That is, one has to get into the >whole domain (or even application) before a constructive alternative (if any) could >be presented. I think PT's discussion of OCM layering for the juice plant illustrates the utility of instance-directed events very nicely. Perhaps I overreach when I say it's essential, but it does seem very natural to do so. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >> I would argue that if an "event" comes at a point in >> my lifecycle at which I want to save it for later, it is >> not part of the lifecycle, but rather a separate data-stream. > >An interesting idea, but... When does an event become a separate data stream? >When the sender generates it, the sender thinks it is an event. In the HOLD >proposal it would migrate if the event queue tried to process it when the state >machine was in a particular state. As an introductory aside, I should start off by saying that I do not use event priority of any kind in SMOOA (not even for self-directed events.) To answer your question, I don't think of "being an event" and "being a separate data-stream" as being mutually exclusive. The model reads two event streams, one implicit in the state model and the other explicit as a collection of instances, either ordered or unordered, depending on the application. This is consistent with the PT injunction that events never have priority (except those self-directed) and the PT guarantee that events passing between two instances arrive in the order of transmission. What I don't like about the HOLD idea is that it puts the onus on the analyst to maintain the guarantee of identical ordering. > >Are you suggesting that when the Analyst assigns a receiver for the event that it >also be designated as a separate data stream? That is, it would always be a >separate data stream and the state model would somehow know when to process it. No, not always, if I understand your question. The situation which inspired the HOLD feature is an incompatibility between the actual event arrival pattern and the structure (topology) of the analyst's state model. If this type of mismatch did not need to be in my models then the standard event processing would suffice and I would use it. OTOH, if in a state model I had to deal with the arrival of events which were largely unrelated to the receiver's current state (e.g., a related object throwing a malfunction event) I might have the sender put those malfunction events into instances so the receiver could finish what he (she?) was doing before processing it. 
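A minimal sketch of that "events as instances" idea, with invented names (MalfunctionReport, Controller) and an ordinary Python list standing in for instances of a passive event object related to the receiver: the sender only creates data, the normal event queue is never touched, and the receiver drains the reports in whatever state action finds it convenient.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MalfunctionReport:
    """A passive 'instantiated event': plain data, not a queued event."""
    code: int
    detail: str

@dataclass
class Controller:
    """The receiver; it looks at its reports only when its lifecycle allows."""
    pending: List[MalfunctionReport] = field(default_factory=list)

    def note_malfunction(self, report):
        # Called from the sender's action: create the instance, generate nothing.
        self.pending.append(report)

    def idle_action(self):
        # A state action in which handling the backlog is convenient.
        while self.pending:
            report = self.pending.pop(0)   # FIFO, if arrival order matters
            print("handling malfunction", report.code, report.detail)

controller = Controller()
controller.note_malfunction(MalfunctionReport(7, "valve stuck"))
controller.note_malfunction(MalfunctionReport(9, "sensor timeout"))
controller.idle_action()   # the receiver decides when; no event priority is needed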
>If so, then how would the criteria for when the data stream is processed be defined? The receiver is related to a (probably passive) event object and processes instances of this whenever it is convenient or necessary. > >> Thus, a separate queue (i.e., not the event queue) >> is a better place for the state machine to get this >> information. This has the benefit of retaining the >> important properties that events are FIFO and without >> priority. > >Isn't this just an architectural issue? That is, Wilkie's pseudocode for the event >queue seemed to preserve this as well. Yes, there is an architectural impact of an additional service. However, you also lose the airtight guarantee of event ordering between the same two instances. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" wrote > I think PT's discussion of OCM layering for the > juice plant illustrates the utility of instance-directed > events very nicely. Perhaps I overreach when > I say it's essential, but it does seem very natural > to do so. Strange, I would say that it nicely illustrates the benefits of associating the routing with relationships. For example, there are 3 events between "Batch" and "Juice Transfer". Which instances do these go between? I would guess those instances are related by R3, but I'd have to check the state models to be sure. Wouldn't it be nice if I could see that information on the OCM (or OIM)? JT1 is especially interesting because R3 is 1:M. There is no relationship that tells us which juice transfer is to be done. Should such a relationship be added? My personal feeling is that, without it, there is some information missing from the information model. But when you do add it, you'll notice that Batch is gaining relationships because it's a manager-object: perhaps some reorganisation is needed. (aside: if it is added, you might decide to put an assigner on it if you don't want the Batch to decide which Juice Transfer to do.) Dave. Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote > I don't see this distinction. [...] I don't see that the > state model knows anything about self-directed events. When a state model sends itself an event, then it knows which transition will be used to move out of the current state. That guarantee is the reason for self-directed events. A maintainer would want to be aware of this distinction from the state model. > > (a) self-directed events are different ... > > the state model explicitly says that they are not routed > > and (b) routing is a generalisation of polymorphism. > > I can agree with both points. I just don't see how they > change because one has modified where one specifies the > navigation. I don't know whether they have to change. But it is clear to me that, when modifying the OOA-of-OOA, it is appropriate to investigate how the proposed changes would interact with these existing concepts.
If polymorthic mapping is moved from the supertype-object to the supertype-relationship then it should probably become indistinguishable from routing. So it would be appropriate to merge the two mechanisms. Then one might revisit the old questions of whether a subtype tree is one instance or many; and of how many objects in the tree can react to a given event (currently, PT say one, KC say any/all). OTOH, I might be tempted to take the current, unified, concept of event generators and subtype it into events that are queued/routed, and events that are not. I don't know whether the average user would see the changes, but someone writing a simulator or translator almost certainly would. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- responding to: >>> lahman 11/14 1:18 PM >>> In the case I am thinking of, due to memory limitations that prevented the size expansion of the transition table, the team added a requeue process that simply put whatever event caused the action to run back on the queue. This would have been unnecessary if the event could have been marked as hold after along with doing the action. [word correction, dynamic not intended] Doesn't this problem go away if the subtype migration is handled via a synchronous create(C)/delete(B) in the B action that consumes En? This comes down to what is to be accomplished by the En' event. If that event simply updates C's attributes or moves it to a new state, B could do that initialization as part of the synchronous migration. If the transition action invoked by En' generates an event, then that could also be done in B since B knows everything C until C processes some other event or somebody else writes its attributes. No. [I wasn't part of this project, so I may be a little 'vague' on the details....] From what I know, the problem was a UI that could operate in several different modes. There were a large number of external events that caused things to happen. In a particular mode, a set of events would require a mode change before the individual events could be processed. Each of the events transitioned to a different state after the mode change (sub-type migration) and performed some significant action. The 'pure' way to do this is to have a receiving state in each mode for each event, which would migrate and generate the same event to the new subtype so that it could be handled. The requeue was simply a shortcut to eliminate states. [Note: The events could also occur when in the 'correct' mode when a migration would not be needed.] So, lets say, in subtype B, events E30 - E50 all migated to C. C then received the event and transitioned to the appropriate state. With the requeue, B had one state that took all of the events E30 - E50, migrated and then resent the event. This eliminated 19 states that would have done the functional equivalent. When this was multiplied by multiple modes and multiple sets of events, a lot of uninteresting states and their associated STT entries were eliminated. (The focus again, was minimal memory footprint.) I had keyed off the phrase: This leads to the question of having intermediate objects react to the events. 
If this were the case, the above requeue would not have been necessary since the 'true' destination of the event was (the temporarily non-existent) subtype C. B was merely an intermediary whose reaction was to migrate thereby creating C. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > No. [I wasn't part of this project, so I may be a little 'vague' on the details....] From what I know, the problem was a UI that could operate in several different modes. There were a large number of external events that caused things to happen. In a particular mode, a set of events would require a mode change before the individual events could be processed. Each of the events transitioned to a different state after the mode change (sub-type migration) and performed some significant action. The 'pure' way to do this is to have a receiving state in each mode for each event, which would migrate and generate the same event to the new subtype so that it could be handled. The requeue was simply a shortcut to eliminate states. [Note: The events could also occur when in the 'correct' mode when a migration would not be needed.] This 'pure' solution also requeues the same event; even the target is the same since it would probably be issued to the supertype. As described, though, I think that 're-queuing' is somewhat misleading. In both cases the events placed on the queue could be events known only to C's state machine, rather than re-queuing the original event. It does raise an interesting question, though. Is an event generated in one subtype and received by a migrated subtype of the same instance a self-directed event? That is, can one count on neither B nor C receiving an external event until the En' is processed? I would certainly hope so since, after all, it is the same instance viewed from the supertype. I am still not sure that my synchronous solution won't work here -- at least mechanically. The state of the domain at any instance of time is defined by the state of the attributes and the events on the queue. Therefore, the result of that 'significant action' has to be expressed as the values of attributes and the events on the queue immediately after the re-queued event is processed. [This is just a variation on the notion of unit testing by disabling the event queue -- the correctness of the transition action can be completely determined by comparing all attributes and the event queue before and after execution.] In my synchronous solution I assume that the action will initialize C properly, update any other attributes, and place the necessary events directly on the queue. The drawback, of course, is that in order to do this B is doing processing that is rightfully C's. This is especially clear if C's actions invoke synchronous services. So the action can't be too significant or one is painted into the corner of incorrectly abstracting the object responsibilities. In this case, though, there is a better solution... > So, lets say, in subtype B, events E30 - E50 all migated to C. C then received the event and transitioned to the appropriate state. With the requeue, B had one state that took all of the events E30 - E50, migrated and then resent the event. 
This eliminated 19 states that would have done the functional equivalent. When this was multiplied by multiple modes and multiple sets of events, a lot of uninteresting states and their associated STT entries were eliminated. (The focus again, was minimal memory footprint.) Somebody has to generate the original E30-E50. That somebody could generate E29 and then the appropriate E30/E50 event in the same action. The E29 would cause the migration and then the E30-E50 would be C's events that would DTRT. So long as E29 and E30/E50 were directed at the supertype they would have to be processed in the correct order (i.e., the order generated) since the same two instances are involved. This would eliminate the re-queuing AND the extra states. > I had keyed off the phrase: > > This leads to the question of having intermediate objects react to the events. > > If this were the case, the above requeue would not have been necessary since the 'true' destination of the event was (the temporarily non-existent) subtype C. B was merely an intermediary whose reaction was to migrate thereby creating C. Yes. And as I think I indicated previously, I am not really sure why I don't like the notion of re-queued events. But they do make me nervous, so I try to find ways to avoid them. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > To answer your question, I don't think of "being an event" and > "being a separate data-stream" > as being mutually exclusive. The model reads two > event streams, one implicit in the state model and the other > explicit as a collection of instances, either ordered or > unordered, depending on the application. What I missed was the notion that you were creating object instances instead of the events. > This is consistent with the PT injunction that events never > have priority (except those self-directed) > and the PT guarantee that events passing > between two instances arrive in the order of transmission. > What I don't like about the HOLD idea is that it > puts the onus on the analyst to maintain the guarantee of > identical ordering. I would think that whether one uses HOLD or instacnes, all bets are off concerning the rules of ordering between instances. Creating an object instance is really just begging the point. From the sender's viewpoint an event it wants to send will be processed out of order -- you have just forced the sender to understand this, which I don't like because... > OTOH, if in a state model I had to deal with the arrival of events which > were largely > unrelated to the receiver's current state (e.g., a related object throwing > a malfunction event) I might have the sender put those malfunction events > into instances > so the receiver could finish what he (she?) was doing before processing it. The sender should have no knowledge of the receiver's state to make this decision. If anybody is going to override the rules of ordering it should be the receiver because the incompatibility lies in the receiver. To me this is a greater evil than overriding the normal event ordering. 
> The receiver is related to a (probably passive) event object and processes > instances of this whenever it is convenient or necessary. This is a much more complicated mechanism than I originally pictured that invades the IM. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > I don't see this distinction. [...] I don't see that the > > state model knows anything about self-directed events. > > What a state model sends itself an event, then it knows > which transition will be used to move out of the current > state. That guarantee is the reason for self-directed > events. A maintainer would want to be aware of this > distinction from the state model. My point is that it only knows what transition to use when it _receives_ the event. The queue manager is the one handling the priorities and that is outside the instance state model. Put another way, if the event generator does not specify the destination, then there is nothing in the state model to indicate the event is directed at itself. For example, if I am not expressing the destination in the state action, I could use an event naming convention where all generated events were Kn and all my state models all showed En transition events. The same mechanism that defined the destination would translate K15 to E99. The maintainer would be aware of this fact when examining the notation (external to the state model) that defined the navigation. > I don't know whether they have to change. But it is clear to > me that, when modifying the OOA-of-OOA, it is appropriate > to investigate how the proposed changes would interact with > these existing concepts. > > If polymorthic mapping is moved from the supertype-object to > the supertype-relationship then it should probably become > indistinguishable from routing. So it would be appropriate > to merge the two mechanisms. Then one might revisit the > old questions of whether a subtype tree is one instance or > many; and of how many objects in the tree can react to a > given event (currently, PT say one, KC say any/all). As it happens, I don't see a need to move polymorphic mapping and reopen that Pandora's Box. The routing ends at the supertype from the viewpoint of the sender. Since there is only one instance, the polymorphic mapping remains just that -- a mapping. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- >Responding to Riemenschneider... > >> This brings up a question. If there was a requirement for an event to cause >> an action that should generate an event to different object's instances, our >> "broadcast" event, can we write the action to generate the required amount of >> the "event", directed to each object, and assume that the events are >> simultaneous? 
I think that we should be able to do this, as long as we pass >> our assumption to the architect, so a broadcast message is generated. Just to clarify the above: O1_E1 causes a transition to S1 in O1. The action for S1 generates an E2 (broadcast message) to O2, O3, & O4 in the form of O2_E2, O3_E2, & O4_E2 respectively. While these are unique event names, they carry the same content to the 3 object instances. > >Presently S-M offers no guarantee on the sequencing of events except between the >same FSMs. You are correct that one thing the 'broadcast' form of event generation >brings to the table is the possibility of ensuring that all the events get placed >upon the relevant queues immediately (i.e., before any other events). If one has a >single event queue per domain, this would mean that all the broadcast events would >be placed consecutively on the queue before any others. If one has a queue per >instance, then all relevant instances' queues would have to be blocked until all the >broadcast events were placed. However, I am not sure of the benefit of this. > >If one has separate instance queues, one still cannot predict when all the events >will be processed because there could already be an arbitrarily large number events >on any one of the queues. If one cannot predict when all are done in some way at >the OOA level, then one will still need wait states, response event counting, etc. >to figure out when it is safe to continue processing. [In the single domain queue >case if one places a single 'response' event on the queue after broadcasting the >events, one knows it will be processed after all broadcast events.] > >One does get the benefit of ensuring that no domain external events are placed on >the queues before all the broadcast events are placed. But since one couldn't >prevent such external events from being placed on the queue just before the >broadcast, I have a hard time coming up with a situation where this would be useful. > OK. I can see that even if the events were transmitted simultaneously, they wouldn't get processed simultaneously. This is very real world. :-) So, what benefit is gained by having an anonymous event message rather than a smaller action statement in the originating state machine? Subject: Re: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... Had to change the line lengths, as it was really getting ugly. :-) >> No. [I wasn't part of this project, so I may be a little 'vague' on the >>details....] From what I know, the problem was a UI that could operate in >>several different modes. There were a large number of external events that >>caused things to happen. In a particular mode, a set of events would >>require a mode change before the individual events could be processed. >>Each of the events transitioned to a different state after the mode change >>(sub-type migration) and performed some significant action. The 'pure' way >>to do this is to have a receiving state in each mode for each event, which >>would migrate and generate the same event to the new subtype so that it >>could be handled. The requeue was simply a shortcut to eliminate states. >>[Note: The events could also occur when in the 'correct' mode when a >>migration would not be needed.] 
> >This 'pure' solution also requeues the same event; even the target is the >same since it would probably be issued to the supertype. As described, >though, I think that 're-queuing' is somewhat misleading. In both cases >the events placed on the queue could be events known only to C's state >machine, rather than re-queuing the original event. > >It does raise an interesting question, though. Is an event generated in >one subtype and received by a migrated subtype of the same instance a self- >directed event? That is, can one count on neither B nor C receiving an >external event until the En' is processed? I would certainly hope so since >, after all, it is the same instance viewed from the supertype. Ummm. You can't have a migrated subtype of the same instance, can you? (OL:MWS pg. 46) Although the paragraph under "Leave subtypes and supertypes consistent" is vague on migration, it does say that if you delete an instance of a subtype, you must delete the corresponding instance of the supertype. Since the subtype is a supertype and the supertype is a subtype, I would think that you couldn't actually migrate an instance of subtype with the same instance of supertype. I also have a problem with the concept of events pending to an object that doesn't exist yet. Since the reason was stated as memory constraints, it is domain pollution to try to work around this in the application model. It may require some fancy footwork in the architecture, but I don't see why this should affect the application analysis. Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > It does raise an interesting question, though. Is an event generated in one subtype and received by a migrated subtype of the same instance a self-directed event? That is, can one count on neither B nor C receiving an external event until the En' is processed? I would certainly hope so since, after all, it is the same instance viewed from the supertype. It is in our translator. :-) >Somebody has to generate the original E30-E50. That somebody could generate E29 and then the appropriate E30/E50 event in the same action. The E29 would cause the migration and then the E30-E50 would be C's events that would DTRT. So long as E29 and E30/E50 were directed at the supertype they would have to be processed in the correct order (i.e., the order generated) since the same two instances are involved. This would eliminate the re-queuing AND the extra states. The problem arises in that E30-E50 are external events from another domain. They can occur when A is either B or C (or some other subtype). C always does the same action in response to a given event. If A is currently B, then and only then does the migration from B->C occur. Since the sender can not know what state A is in when it receives the event (nor should it ever know even when it sends the event) it cannot send the migrate event. [Unless we went to the extreme where there were n^2 specific migrate events that moved the object from each subtype to every other subtype, and we created another object to receive the external events and know what subtype could process each event and then sent n migrate events to force the migration to the particular subtype who could process the real event and then sent the event. 
All subtypes would then have to accept and ignore all the migrate events directed at another subtype (which should be automatic with polymorphic events). But, if the purpose is to shrink the STT then this is not a good solution.] So, given that - an event can be received from an external domain when in any subtype - the event must be processed by a specific subtype - the event cannot be lost or ignored I still think the only choices are 1) N extra states in each subtype to migrate and send an event on to the correct subtype 2) HOLD the event but go ahead and transition and process a state to do the migrate > Yes. And as I think I indicated previously, I am not really sure why I don't like the notion of re-queued events. But they do make me nervous, so I try to find ways to avoid them. B-) Strange, but that was my first gut response also. The added process to achieve a smaller memory footprint is questionable. That is supposed to be a design choice not an analysis choice. I'll let that slide as a specific kludge for a specific project. However, after I started thinking about it from the standpoint of changing the rules for the STT to accommodate three unique complementary responses, it became palatable. Monospaced font:

                  Consume   Transition   Exception
  TRANSITION      true      true         false
  IGNORED         true      false        false
  CAN'T HAPPEN    true      false        true
  ---
  HOLD            false     false        false
  HOLD w/ACTION   false     true         false
  ---
  1 ???           true      true         true
  2 ???           false     true         true
  3 ???           false     false        true

The first three STT entries have always exhibited these behaviors. The Consume behavior was not needed since it was always true. However, if we add Hold, the consume behavior has to be added. The other possibilities may have some usefulness in exception handling, however, right now I'm not prepared to go there. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... >> It does raise an interesting question, though. Is an event generated in >>one subtype and received by a migrated subtype of the same instance a self- >>directed event? That is, can one count on neither B nor C receiving an >>external event until the En' is processed? I would certainly hope so since >>, after all, it is the same instance viewed from the supertype. > >It is in our translator. :-) > >>Somebody has to generate the original E30-E50. That somebody could >>generate E29 and then the appropriate E30/E50 event in the >>same action. The E29 would cause the migration and then the E30-E50 would >>be C's events that would DTRT. So long as E29 and E30/E50 were directed at >>the supertype they would have to be processed in the correct order (i.e., >>the order generated) since the same two instances are involved. This would >>eliminate the re-queuing AND the extra states. > >The problem arises in that E30-E50 are external events from another domain. >They can occur when A is either B or C (or some other subtype). C always >does the same action in response to a given event. If A is currently B, >then and only then does the migration from B->C occur.
Since the sender >can not know what state A is in when it receives the event (nor should it >ever know even when it sends the event) it cannot send the migrate event. > >[Unless we went to the extreme where there were n^2 specific migrate events >that moved the object from each subtype to every other subtype, and we >created another object to receive the external events and know what subtype >could process each event and then sent n migrate events to force the >migration to the particular subtype who could process the real event and >then sent the event. All subtypes would then have to accept and ignore all >the migrate events directed at another subtype (which should be automatic >with polymorphic events). But, if the purpose is shrink the STT then this >is not a good solution.] > >So, given that > >- an event can be received from an external domain when in any subtype >- the event must be processed by a specific subtype >- the event cannot be lost or ignored > >I still think the only choices are >1) N extra states in each subtype to migrate and send an event on to the >correct subtype >2) HOLD the event but go ahead and transition and process a state to do the >migrate > I would think that you could have a model where C is created at the same time as B. C would sit in it's creation state, until B enters it's final state, at which time the action of B's final state would generate an event to C that would cause C to enter a state where it can accept events. Since events are never lost, C would then go through it's state machine via E30-E50. This alleviates the need for the hold (at least as far as the above description is concerned. :-) ). =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman: ----------------------------- >What I missed was the notion that you were creating object instances instead of the >events. > >> This is consistent with the PT injunction that events never >> have priority (except those self-directed) >> and the PT guarantee that events passing >> between two instances arrive in the order of transmission. >> What I don't like about the HOLD idea is that it >> puts the onus on the analyst to maintain the guarantee of >> identical ordering. > >I would think that whether one uses HOLD or instacnes, all bets are off concerning >the rules of ordering between instances. Creating an object instance is really >just begging the point. From the sender's viewpoint an event it wants to send will >be processed out of order -- you have just forced the sender to understand this, >which I don't like because... >From the sender's viewpoint the event it sends via instance CAN be processed out of order, but need not be. The way I use this, the sender does not care what the receiver does with it nor when. (Main point! --> )This subsumes HOLD semantics, and has the advantage (in my way of thinking) of exposing the discontinuity between the nature of the instantiated event and the receiving object's lifecycle as expressed by its state-transition graph. Another advantage is the retention of the simple (non-prioritized) event-dispatch mechanisms. 
>This is a much more complicated mechanism than I originally pictured that invades >the IM. "Invades" is such a harsh word! Is it invading the IM if PT put it there? :-) I can sympathize with a desire to keep things off the IM unless they are demonstrably necessary. But I like the "instantiated" event on the IM because it says to me, "here is some unusual information". There are many reasons to do this, but one good reason is that the event may be totally asynchronous to the lifecycle of the object assigned to handle it and of a different nature than those which come in via the standard event mechanism. This is what I suspect was the case in the telecomm example which motivated the introduction of HOLD in the KC product. My personal preference is to highlight this last reason on the IM. (And no, I don't think this creates an unreasonable level of coupling between objects. As Mr. Rochford says, objects which interact via some protocol need some knowledge of each other, as expressed by the protocol.) My motivation to join this thread was to disagree with the notion that event priorities (in the SM formalism) are a necessary addition to SMOOA. I contend that it puts the method on the slippery slope to arbitrary event priorities and a loss of a fundamental property of the method (guaranteed event ordering between instances). On a meta-method note, if I find myself chafing under the limitations of a particular method (such as SMOOA), I use another method. This helps to keep me out of the "misfit method" syndrome. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > > When a state model sends itself an event, then it knows > > which transition will be used to move out of the current > > state. That guarantee is the reason for self-directed > > events. A maintainer would want to be aware of this > > distinction from the state model. > > My point is that it only knows what transition to use when it > _receives_ the event. The queue manager is the one handling > the priorities and that is outside the instance state model. The reason for giving priority to the self-generated event is so that the state model knows, when it _generates_ the event, which transition will be taken. > Put another way, if the event generator does not specify the > destination, then there is nothing in the state model to > indicate the event is directed at itself. Which is why, for self-directed events, the generator _must_ indicate that the event is self-directed. A secondary advantage of using a distinct notation for self-directed events is that it becomes possible to ensure that events which are (coincidentally) routed back to the sender are not given increased priority. > The maintainer would be aware of this fact when examining the > notation (external to the state model) that defined the > navigation. But the behaviour (and existence) of self-directed events is local to the object. It would, IMO, be wrong to move it to an external notation. That would invite confusion. I can almost picture the scene: someone stares at a model for an hour, and then slaps their forehead: "Oh! now I see! It works because that event is self-directed. Duh".
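For concreteness, a sketch of where that priority could live, assuming per-instance queues and invented names (QueueManager, post, next_event_for): the rule that a self-directed event is delivered before anything else pending for the same instance is enforced by the queue manager, while the generator merely flags the event as self-directed by addressing its own instance.

from collections import defaultdict, deque

class QueueManager:
    """One plausible realization of the self-directed priority rule."""

    def __init__(self):
        self.self_q = defaultdict(deque)   # events an instance sent to itself
        self.main_q = defaultdict(deque)   # everything else, strictly FIFO

    def post(self, event, sender, target):
        # The generator marks the event as self-directed simply by
        # addressing it to its own instance.
        if sender is target:
            self.self_q[target].append(event)
        else:
            self.main_q[target].append(event)

    def next_event_for(self, target):
        # Priority is applied here, outside any state model: a pending
        # self-directed event is always delivered first.
        if self.self_q[target]:
            return self.self_q[target].popleft()
        if self.main_q[target]:
            return self.main_q[target].popleft()
        return None

queue = QueueManager()
pump, valve = object(), object()
queue.post("E1", sender=valve, target=pump)   # ordinary event
queue.post("E2", sender=pump, target=pump)    # self-directed
assert queue.next_event_for(pump) == "E2"     # delivered first

Whether the self-directed flag is written at the event generator (Whipp's preference) or recovered from an external routing specification (Lahman's) only changes who supplies the sender and target arguments; the dispatch rule itself stays in the architecture.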
Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Anonymous event notification "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp: --------------------------- >> I think PT's discussion of OCM layering for the >> juice plant illustrates the utility of instance-directed >> events very nicely. Perhaps I overreach when >> I say it's essential, but it does seem very natural >> to do so. > >Strange, I would say that it nicely illustrates the >benefits of associating the routing with relationships. Maybe we are talking about different things. I was not interested so much in the method of routing as I was in asserting that the routing was necessary, as opposed to the idea of the events being truly "anonymous", i.e., broadcast. But your idea of routing events along relationships is interesting too, so... >For example, there are 3 events between "Batch" and >"Juice Transfer". Which instances do these go between? >I would guess those instances are related by R3, but >I'd have to check the state models to be sure. >Wouldn't it ne nice if I could see that information on >the OCM (or OIM). Rather than having to excavate this from the state models proper, I would put this in text on the relationship description of R3. Additionally, I have found textual overviews of the OCM to be vital. Such instance-information can go there, if it is important to you. Both methods are low-tech, but effective. I would also be concerned about anything which might add to my already busy OCM's. >JT1 is especially interesting because R3 is 1:M. There >is no relationship that tells us which juice transfer >is to be done. Should such a relationship be added? >My personal feeling is that, without it, there is >some information missing from the information model. Text counts as OIM, too! I have been burned by trying to put too much into relationships on the IM. It turned out that one creative analyst could find an "interesting" relationship between any two objects in the domain! Maybe I'm overreacting but I am definitely in the minimalist camp wrt to the graphical OIM. >But when you do add it, you'll notice that Batch >is gaining relationships because its a manager- >object: perhaps some reorganisation is needed. In general, there is also the probability that in adding such relationships some loops will be created, which may decrease the value of the additional information by making the diagram harder to read. (BTW, thanks for doing the actual work of looking for the actual example. :-) ) -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > OK. I can see that even if the events were transmitted simultaneously, they > wouldn't get processed simultaneously. This is very real world. :-) > > So, what benefit is gained by having an anonymous event message rather than a > smaller action statement in the originating state machine? 
My disenchantment with associating a destination with an event at the event generator is purely aesthetic at the moment. State machines are not supposed to understand context, and that strikes me as a context that should not be explicit in an action. Coming up with a clear example of why such an aesthetic is important in practice is a whole other problem. B-) I think this context issue is similar, albeit tenuously, to the common debates about why defining and using accessor methods in a language like C++ for trivial, rigidly defined attributes is a Good Thing. One knows it is breaking the rules of encapsulation to access the attribute directly, but people keep coming up with examples where it seems silly. Those relatively rare but painful situations where one gets burned by the need to change an attribute that everybody previously thought was immutable tend to get lost in the refuse of decaying synapses. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > Had to change the line lengths, as it was really getting ugly. :-) Curious. I have Netscape set to write 60 character lines. Or was it the quotes that were getting gnarled? > Ummm. You can't have a migrated subtype of the same instance, can you? > (OL:MWS pg. 46) Although the paragraph under "Leave subtypes and supertypes > consistent" is vague on migration, it does say that if you delete an instance > of a subtype, you must delete the corresponding instance of the supertype. Funny you should mention that. My copy of OL:MWS has a flock of question marks in the margin of that paragraph. Though that was about the implication that there were separate instances of supertypes. Our tool instantiates supertypes to some degree, but I don't care for that. Like Highlanders, there can be only one. But I digress... When I do subtype migration I always do it synchronously so that when the action completes there is still only one instance with the same identifiers and things are consistent. Philosophically I regard it as the same instance and the transformation occurs within the atomic unit of time (i.e., an action). This seems reasonable given that the values of the identifiers will be the same. > Since the subtype is a supertype and the supertype is a subtype, I would > think that you couldn't actually migrate an instance of subtype with the same > instance of supertype. I'm afraid I don't follow this. [My inability to keep up is a failing my wife has already pointed out to me on numerous occasions.] When does the sub-/super- get reversed during migration? > I also have a problem with the concept of events pending to an object that > doesn't exist yet. Since the reason was stated as memory constraints, it is > domain pollution to try to work around this in the application model. It may > require some fancy footwork in the architecture, but I don't see why this > should affect the application analysis. It is one thing to use HOLD to delay processing of an event to an _existing_ state machine. It is a very different thing to delay processing of an event because the target instance doesn't exist yet.
HOLD may bend the rules (on event order) but sending events to nonexistent instances strikes me as anarchy. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > >Somebody has to generate the original E30-E50. That somebody could generate E29 and then the appropriate E30/E50 event in the same action. The E29 would cause the migration and then the E30-E50 would be C's events that would DTRT. So long as E29 and E30/E50 were directed at the supertype they would have to be processed in the correct order (i.e., the order generated) since the same two instances are involved. This would eliminate the re-queuing AND the extra states. > > The problem arises in that E30-E50 are external events from another domain. Ah, another problem space detail rises to the surface. However, I don't think this is a major obstacle. The bridge itself can place the E29 on the queue before the En. If the instance is a C, then E29 is IGNOREd and if the instance is a B, E29 causes migration. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" wrote: > Maybe we are talking about different things. > I was not interested so much in the method > of routing as I was in asserting that the routing > was necessary, as opposed to the idea of the > events being truly "anonymous", i.e., broadcast. Yes, I agree that routing is necessary. It is only the mechanism that I've been talking about. > I would put this in text on the relationship description > of R3. > [...] > I would also be concerned about anything which might > add to my already busy OCM's. If we can say that each line on the OCM is associated with exactly one relationship-chain, then you aren't really adding much additional clutter to the OCM. If you were using a tool, then you might have an option to turn off the display of that text. I agree that text descriptions are necessary, but I prefer to use them for explanations and overviews. Information that affects simulation (e.g. formulae for (M) attributes and routing of events) should be stated formally, as part of the model. This simplifies the ADFDs, because they then don't have to repeat the textual description as an algorithmic process. > I have been burned by trying to put too much into > relationships on the IM. Even if you do associate events with relationships, I don't think you need to show it on the OIM. The important thing is that the relationship definitions include the information. > Maybe I'm overreacting but I am > definitely in the minimalist camp wrt to the > graphical OIM. I would see it more as a way of helping the analyst to see the essential relationships. If either data or control is passed between objects along a relationship, then that relationship is meaningful. If no relationship exists for data/control, then it probably should.
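A minimal sketch of what "the relationship definitions include the information" could mean in practice, using the juice-plant names from this thread (JT1, R3, Batch, Juice Transfer) but otherwise invented structures: the routing is declared once as data attached to a relationship chain, and a simulator or translator resolves the destination instances by navigating it, instead of each action naming its destination.

# Hypothetical routing declarations: each event label is bound to a
# relationship chain instead of to a destination written into the action.
ROUTING = {
    "JT1": ["R3"],   # Batch -> Juice Transfer across R3 (1:M in this thread)
}

def resolve_targets(sender, event_label, navigate):
    """Navigate the declared chain from the sending instance.

    `navigate(instance, rel)` is assumed to return the instance or list of
    instances reached across one relationship; chaining the hops yields the
    event's destination set.
    """
    instances = [sender]
    for rel in ROUTING[event_label]:
        reached = []
        for inst in instances:
            result = navigate(inst, rel)
            reached.extend(result if isinstance(result, list) else [result])
        instances = reached
    # For a 1:M chain like R3 this can still yield several candidates --
    # exactly the JT1 ambiguity noted earlier in the thread.
    return instances

Whether the surplus candidates are then narrowed by an added relationship or by an assigner is a modelling decision, not something buried in an action.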
As I said previously, the OCM is a great tool for discovering missing objects (before you write the state models). I think the event-relationship association would increase the power of this tool. > In general, there is also the probability that in > adding such relationships some loops will be created, > which may decrease the value of the additional > information by making the diagram harder to read. Anyone can create a bad model. When such loops are added, they can be investigated. My emphasis is to add work at the OIM/OCM/OAM stage to trivialize the construction of state models. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > >From the sender's viewpoint the event it sends via instance CAN be > processed out of order, but need not be. The way I use this, the > sender does not care what the receiver does with it nor when. Perhaps I misunderstood, but I thought the way this worked was that the sender defines the instance in lieu of generating an event. I would argue that depends upon the context of the sender, even if it is sometimes processed as if it were a normal event. The sender knows that the receiver *might* not be able to accept the event. Just to clarify, what does 'sends via instance' mean? Does it mean that at the action level there is an event generator process as usual that is somehow colorized to produce an instance if the receiver is blocked or an event otherwise? Or does it create an instance in lieu of an event generator? Or are all events instantiated in the translation?
In the past couple of weeks there have been more suggestions for modifying the notation than in the prior couple of years. It's one of the fun things about going off on tangents. Moreover, I don't think any of the proposals have demonstrated that they are absolutely necessary. Even HOLD is more convenient than necessary; the models might be big & ugly, but I think one could live without it. > On a meta-method note, if I find myself chafing under the limitations > of a particular method (such as SMOOA), I use another > method. This helps to keep me out of the "misfit method" syndrome. The other side of the coin is that I haven't seen any situations where I couldn't model what I wanted to model. That might be because our applications don't push the envelope very much (we have only two domains that actually have asynchronous events). Or it might be that the notation is sufficient. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > My point is that it only knows what transition to use when it > > _receives_ the event. The queue manager is the one handling > > the priorities and that is outside the instance state model. > > The the reason for giving priority to the self generated event > is so that the state model knows, when it _generates_ the event, > which transition will be taken. I can't agree with that notion. It violates the FSM rule that a state in a model cannot know its prior or next state. When the analyst decides that a particular transition event should be generated within the state model rather than externally, the analyst immediately has a combinatorial problem in dealing with all possible interactions with externally generated events. The solution for making this problem more tractable was to give priority to self-directed events. It simply makes the analyst's problem (of deciding where transition events should be generated) more manageable. Thus I see the rule that prioritizes self-directed events to be a rule that applies at the OCM level rather than the state model level. A particular state model should have no clue that some events will have priority over others. > > Put another way, if the event generator does not specify the > > destination, then there is nothing in the state model to > > indicate the event is directed at itself. > > Which is why, for self-directed events, the generator _must_ > indicate that the event is self-directed. Fascinating, but clearly related to the idea that the state model knows its context. In my view having the generator know that the vent is self directed would defeat the purpose of removing the destination. > But the behaviour (and existance) of self-directed events is > local to the object. It would, IMO, be wrong to move it to > an external notation. That would invite confusion. I can almost > picture the scene: someone stares at a model for an hour, and > then slaps their forehead: "Oh! now I see! It works because > that event is self-directed. Duh". And you've never been in a situation where the same event is generated in two different actions, perhaps in two different state models, and you didn't understand how things could possibly work until you noticed the second event generator? 
You resolve those sorts of problems by looking at the right diagram (e.g., the OCM). So if you want to know which events are self-directed, you look at he right diagram. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > (Main point! --> )This subsumes HOLD semantics, and has the advantage > (in my way of thinking) of exposing the discontinuity between > the nature of the instantiated event and the receiving object's > lifecycle as expressed by its state-transition graph. This seems to be an example of an asynchronous producer-consumer pair -- so an assigner would seem appropriate. Dave -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman: > > >From the sender's viewpoint the event it sends via instance CAN be >> processed out of order, but need not be. The way I use this, the >> sender does not care what the receiver does with it nor when. > >Perhaps I misunderstood, but I thought the way this worked was that the sender >defines the instance in lieu of generating an event. I would argue that depends >upon the context of the sender, even if it is sometimes processed as if it were a >normal event. The sender knows that the receiver *might* not be able to accept the >event. > >Just to clarify, what does 'sends via instance' mean? Does it mean that at the >action level there is an event generator process as usual that is somehow colorized >to produce an instance if the receiver is blocked or an event otherwise? Of does >it create an instance in lieu of an event generator? Or are all events >instantiated in the translation? My apologies. I should have just pointed to OL:MWS pp. 47-49, particularly the state model at the bottom of p.49 -- that's what I meant, except that the example shows both the generation of a normal event (M4) and the creation of an instance of Product Transfer. In my example I left out the normal event; it could be included or not depending on whether the receiver needs a "kick" to wake up and smell the instance. (Or, if you don't care for that level of coupling, always send the event both ways.) No fancy architecture is required. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > The the reason for giving priority to the self generated event > > is so that the state model knows, when it _generates_ the event, > > which transition will be taken. > > I can't agree with that notion. 
It violates the FSM rule that a state > in a model cannot know its prior or next state. Yes, it does. OOA96 says: "In OOA91, the only rule regulating the order in which events are received applied to events transmitted between a single sender/receiver pair. In all other cases OOA91 made no assumptions, and the analyst was directed to ensure that the state models operated properly regardless of the order in which events were received. However, in some cases this policy required the analyst to provide additional logic that had no real value in the domain under consideration. To remedy this problem, OOA96 imposes an additional rule..." In other words, the rule was introduced to allow a state model to be simplified by using assumptions about the next state (Actually, the assumption is in the ordering of events. But if you know which event you'll receive next, then you can deduce the next state). It is my belief that those assumptions should be made explicit within the state model itself, and should not be visible outside the state model. The rule is meaningless outside of a state model because no other part of the model is effected. The OCM and OAM show interactions between objects: self directed events do not interact! [Actually, as I've said before, I don't really like the rule. It feels like a hack that papers over weaknesses in the state model formalism] Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp -------------------------- >> (Main point! --> )This subsumes HOLD semantics, and has the advantage >> (in my way of thinking) of exposing the discontinuity between >> the nature of the instantiated event and the receiving object's >> lifecycle as expressed by its state-transition graph. > >This seems to be an example of an asynchronous producer-consumer >pair -- so an assigner would seem appropriate. As one reasonably accurate depiction of the mechanism and one purpose of the stored event, I referred Mr. Lahman to OL:MWS, p. 49, bottom diagram. I can see that situation as a producer-consumer scenario (tanks are being produced and consumed), but I don't know about the general case. Maybe a counterexample will help. Another situation where I used the stored event idea before was in handling exception conditions in a control system. Because the system was safety critical, we had to account for every kind of abnormal or malfunctioning hardware. In the initial cut, the various control information flowed down the OCM, and the responses (i.e., MotionComplete) bubbled back up, with everything done with normal SMOOA events. This much was reasonably easy. When we added the failure events, however, we found that the objects (particularly mid-level "coordinators") were sometimes not in a state where they could do anything about the event, but needed to wait until they had finished something else (e.g. reconfigured some valves or responded to a user request). All efforts to integrate these exceptions to the normal response events proved very unwieldy. 
That's when we "discovered" the idea that these low-level objects which might stub their toes on the hardware could (a) instantiate an "exception report" object and (b) send an event (Enn: ExceptionOccurred (ExceptionId)) to various active objects in the collaborating group of objects. (c) if possible, try to finish what they were doing, otherwise make themselves ready to perform a different operation. (e.g. a Retry/Recover from a higher-level object). The upper level, controlling objects could deal with these problems according to their severity, number, and mode of the system, etc. This turned out to be much more manageable than the other alternatives, since it allowed us to defer some of the decisions about error-handling until we had gained experience with the mechanical hardware. (Of course, at the time I didn't have the benefit of ESMUG to bail me out, so maybe I missed a better solution. :-) ) -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" wrote > > This seems to be an example of an asynchronous producer- > > consumer pair -- so an assigner would seem appropriate. > > ... OL:MWS, p. 49, bottom diagram. I can see that situation > as a producer-consumer scenario (tanks are being produced > and consumed), but I don't know about the general case. > > Maybe a counterexample will help. > > [...] > > That's when we "discovered" the idea that > these low-level objects which might stub their toes > on the hardware could > (a) instantiate an "exception report" object and (OL:MWS names these as "incident objects" -- P13) > (b) send an event (Enn: ExceptionOccurred (ExceptionId)) > to various active objects in the collaborating group of > objects. You can still view the low level object as producers (of exception reports), and the high level objects as consumers. You still need to coordinate the various groups of objects. A mixture of incident objects and assigners can probably clean up most situations of the type you describe. I see the techniques as complementary, not mutually exclusive. > [...] Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Whipp... > > > The problem with broadcasting is that the receiver must know > > who to listen for. All that you do is moved the coupling > > from the sender to the receiver. The event would have to > > carry the identity of the sender, and the receiver will > > act as an observer on specific sender-ids. I don't see any > > benefit. > > Why would the receiver need to know the identity of the sender? What I > would like to see is that instance A1 broadcasts event E397. Meanwhile > instance W14 accepts event E29. 
So far both state machines can be > designed independently. Now the Analyst steps back and thinks. "To > solve my overall problem W14 is going to have to get E29 from > somewhere. Sonofagun, I just happen to have A1 generating E397 under > exactly the right conditions. I'll just route E397 to W14 as E29 right > here in my new Event Mapping Table. And I'll let that Architect, who > already has too much time on his hands, figure out how to map the data > in volts on E397 to millivolts on E29." In effect one is just using a > new notation at the OCM level as a bridge to route events and map data > packets between objects. > My problem with this is, why did A1 generate E397 and why did it choose to add the data that it did? A receiving object knows what events it must know about and the data that must arrive with those events. In the same way, if an event is to be generated, even if the destination is not known, the meaning of the event - in the context of the instance generating it - and the data to be carried must be known. This would imply that the state machine can only be produced in conjunction with the OCM; this ensures that a state machine only generates an event if and only if it is required to meet the object's interface as defined by the OCM. Note that this also implies that if the OCM changes insofar as an object's interface changes by producing more or less events, the state machine must also change. The question would then be: should the analyst look along the path of the event on the OCM and identify the destination for the event? If the analyst does not, but allows the architecture to sort out the destination, then the state machine will be immune to OCM changes that affect only the object that receives the event. However, a mechanism would then be required to identify that the instance of event E1 generated by A1 was for B1 and not B2. Currently this is performed within the state; it could be argued that this is hiding the rules of your model. The use of a mechanism as suggested by Whipp could allow the identification of the instances that communicate by event. Thus the state machines need not know where the events generated are going, and this information would be available earlier in the domain development cycle and modelled in a more visible fashion. Dave Harris Sorry if I have restated anything anyone else has stated, but I have been away for a few days. Subject: Re: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- >Responding to Riemenschneider... >> Had to change the line lengths, as it was really getting ugly. :-) > >Curious. I have Netscape set to write 60 character lines. Or was it the quotes >that were getting gnarled? > Really curious. This message has lines > 80 characters. It was both the quoted text and the new text that was long. Spooky. :-) >> Ummm. You can't have a migrated subtype of the same instance, can you? >> (OL:MWS pg. 46) Although the paragraph under "Leave subtypes and supertypes >> consistent" is vague on migration, it does say that if you delete an instance >> of a subtype, you must delete the corresponding instance of the supertype. > >Funny you should mention that. My copy of OL:MWS has a flock of question marks in >the margin of that paragraph.
Though that was about the implication that there were >separate instances of supertypes. Our tool instantiates supertypes to some degree, >but I don't care for that. Like Highlanders, there can be only one. But I >digress... > I disagree about there only being one (instance that is). I've also noticed that OL:MWS pg. 57 seems to conflict with what I've stated above. Under "Subtype Migration" it suggests that an instance of a supertype can migrate between subtypes, so if you migrated, and deleted the previous subtype instance, according to OL:MWS pg. 46 you would have to delete the supertype instance. >> Since the subtype is a supertype and the supertype is a subtype, I would >> think that you couldn't actually migrate an instance of subtype with the same >> instance of supertype. > >I'm afraid I don't follow this. [My inability to keep up is a failing my wife has >already pointed out to me on numerous occasions.] When does the sub-/super- get >reversed during migration? > Wives (Spouses (to be PC)) do have ways of pointing out our failings. ;-) I probably worded it badly, but the point that I was trying to make was not one of reversal. I was trying to say that the instance of the object is both the subtype and the supertype. This is very clearly stated in the literature (the only inconsistancy being OL:MWS pg. 57 ;-) ), so if you have multiple instances of subtypes, you have to have multiple instances of the supertype. Supertypes are just another object, so why couldn't you have multiple instances? I think the situation where you migrate to another subtype and finish with the previous subtype is best described by following the rules on OL:MWS pg. 46. (Although, it could have been stated more fully by also saying the reverse of the two rules in the paragraph. i.e., if you delete the supertype instance, you have to delete the corresponding subtype instance; and if you create a subtype instance, you have to create a corresponding supertype instance.) =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: (SMU) Anonymous event notification "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- Gregory Rochford wrote SNIP > Ah, engineers, always changing things. :) > > I think I understand what you want to do. (At least > as far as using relationship instances to direct events) > My first thought is: > > What is the cost/benefit of maintaining relationships > with all instances that event communication occurs with vs. > finding the identifier of the destination each time > an event needs to be sent? If an instance has to send another instance an event is this not a rule of the subject matter of the domain. As such one would expect to see a relationship on the OIM which represents this rule. > > > If an instance is typically in a relationship with the > destination instance, then this could be a win. > > And on the negative side: > > How many relationships are added to the OIM just for > event communication? If a large number of relationships have to be added is this not an indication that the objects in the domain may be to tightly coupled? > > > How much additional benefit is there in decoupling > the state models in a domain? Do people really > reuse objects at the domain level? 
Remember that reuse comes in two forms, lifting an object or domain and using it exactly as it is and extending or altering as functionality changes or bugs are found. The less coupling between objects the easier it is to cope with small ( or not so small ) changes as the encapsulation will hold at a lower level. Dave Harris Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... >Ah, another problem space detail rises to the surface. However, I don't think this is a major obstacle. The bridge itself can place the E29 on the queue before the En. If the instance is a C, then E29 is IGNOREd and if the instance is a B, E29 causes migration. Not in my universe (or toolset as the case may be), our bridges are 'analyzed' in SDFDs. No magic allowed. [Bridging is done via wormholes. Each wormhole has a corresponding receive SDFD in the other domain's interface bridge.] The N 'states' to receive and resend the events just move into the bridge SDFDs. The Client has a wormhole to a server domain with an async return of an event. So far we have a bridge SDFD for the server domain. To preprocess an event on receipt would require a bridge SDFD on the client domain as well. What you have proposed, I think, is that the bridge be allowed to have a single SDFD which does some processing and then resends whatever event caused it execute. This is again the same thing that was proposed as 'HOLD with Action' except that it is limited to bridges. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > I disagree about there only being one (instance that is). I've also noticed > that OL:MWS pg. 57 seems to conflict with what I've stated above. Under > "Subtype Migration" it suggests that an instance of a supertype can migrate > between subtypes, so if you migrated, and deleted the previous subtype > instance, according to OL:MWS pg. 46 you would have to delete the supertype > instance. > I probably worded it badly, but the point that I was trying to make was not > one of reversal. I was trying to say that the instance of the object is both > the subtype and the supertype. This is very clearly stated in the literature > (the only inconsistancy being OL:MWS pg. 57 ;-) ), so if you have multiple > instances of subtypes, you have to have multiple instances of the supertype. > Supertypes are just another object, so why couldn't you have multiple > instances? > I think the situation where you migrate to another subtype and finish with the > previous subtype is best described by following the rules on OL:MWS pg. 46. > (Although, it could have been stated more fully by also saying the reverse of > the two rules in the paragraph. i.e., if you delete the supertype instance, you > have to delete the corresponding subtype instance; and if you create a subtype > instance, you have to create a corresponding supertype instance.) Think of the ODMS example. A Disk can be either an Online Disk or an Offline Disk. It can also migrate back and forth between the two. When a disk migrates, it still remains a disk. The disk is NEVER 'deleted'. 
All of the supertype attributes of the disk, including the identifer, remain. How is this acomplished? Within the context of a single action, the disk may be deleted and recreated, but to the rest of the analysis, that disk is always there. If you send multiple events to the disk I would think the sender would expect them to be delivered regardless of whether the object migrates. I can see valid arguments on both sides of this issue, so more important than the outcome is the definition of the rules. 1) When an object instance is deleted, all pending events to that instance are deleted. 2) When an object migrates, all events destined to the subtype of the instance get deleted. But, do polymorphic events destined to that instance also get deleted, or are they sent to the new subtype? >Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! Didn't IBM discontinue sales & support of OS/2? (I from '91-'96. IBM seemed to be trying to kill it then.) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp: --------------------------- >You can still view the low level object as producers >(of exception reports), and the high level objects as >consumers. You still need to coordinate the various >groups of objects. > Minor definition quibble: I think of the producer-consumer situation as being a case where the consumer cannot proceed without the item provided by the producer, which is not true in the case of my incident objects-- the consumer can do without them just fine. >A mixture of incident objects and assigners can probably >clean up most situations of the type you describe. I see >the techniques as complementary, not mutually exclusive. How would the assigner be employed? -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Hold event after transistion Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- Dana Simonson wrote: > > "Dana Simonson" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Riemenschneider... > > > I disagree about there only being one (instance that is). I've also noticed > > that OL:MWS pg. 57 seems to conflict with what I've stated above. Under > > "Subtype Migration" it suggests that an instance of a supertype can migrate > > between subtypes, so if you migrated, and deleted the previous subtype > > instance, according to OL:MWS pg. 46 you would have to delete the supertype > > instance. > > > I probably worded it badly, but the point that I was trying to make was not > > one of reversal. I was trying to say that the instance of the object is both > > the subtype and the supertype. This is very clearly stated in the literature > > (the only inconsistancy being OL:MWS pg. 57 ;-) ), so if you have multiple > > instances of subtypes, you have to have multiple instances of the supertype. > > Supertypes are just another object, so why couldn't you have multiple > > instances? 
> > I think the situation where you migrate to another subtype and finish with the > > previous subtype is best described by following the rules on OL:MWS pg. 46. > > (Although, it could have been stated more fully by also saying the reverse of > > the two rules in the paragraph. i.e., if you delete the supertype instance, you > > have to delete the corresponding subtype instance; and if you create a subtype > > instance, you have to create a corresponding supertype instance.) > > Think of the ODMS example. A Disk can be either an Online Disk or an Offline Disk. It can also migrate back and forth between the two. When a disk migrates, it still remains a disk. The disk is NEVER 'deleted'. All of the supertype attributes of the disk, including the identifer, remain. How is this acomplished? Within the context of a single action, the disk may be deleted and recreated, but to the rest of the analysis, that disk is always there. If you send multiple events to the disk I would think the sender would expect them to be delivered regardless of whether the object migrates. > > I can see valid arguments on both sides of this issue, so more important than the outcome is the definition of the rules. > > 1) When an object instance is deleted, all pending events to that instance are deleted. > 2) When an object migrates, all events destined to the subtype of the instance get deleted. But, do polymorphic events destined to that instance also get deleted, or are they sent to the new subtype? > When I'm confused, I return to the fundamentals. Events are directed to an instance. Not to a subtype or supertype, but to the instance. The state machine for the instance can be spread out over the subtype/supertype hierarchy, but it's still one (big) state machine. If an _instance_ with pending events is deleted, I would consider that an analysis error. An event sent to an instance whose state model doesn't have a transition for the event is either a can't happen or event ignored. (Also, I don't assume events are deleted when an instance is deleted) So in answer to 2) If the event directed to an instance is not currently allowed, because the event is for a subtype that the instance has migrated from, then I would call that a can't happen. The resolution for polymorphic events to subtype specific events occurs when the event is consumed by the state machine, so the event is processed as normal. best gr Subject: RE: (SMU) Hold event after transistion "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson: --------------------------------- > 1) When an object instance is deleted, all pending events to that instance are deleted. I don't know if I agree with this. For one thing, it seems to break the rule that events are never lost (i.e., always delivered). It also extremely impractical in some architectures (e.g., distributed systems). Can you provide a reference or a rationale? -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > My apologies. I should have just pointed to OL:MWS pp. 
47-49, > particularly the > state model at the bottom of p.49 -- that's what I meant, > except that the example shows both the generation of a normal event (M4) and > the creation of an instance of Product Transfer. In my example I left out > the normal event; it could be included or not depending on whether the > receiver needs a "kick" to wake up and smell the instance. (Or, if you > don't > care for that level of coupling, always send the event both ways.) My fault. I completely forgot that discussion earlier in the thread. Seems to be happening a lot now with the onset of terminal senility. $nevermind -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > > The the reason for giving priority to the self generated event > > > is so that the state model knows, when it _generates_ the event, > > > which transition will be taken. > > > However, in some cases this policy required the analyst to > provide additional logic that had no real value in the domain > under consideration. To remedy this problem, OOA96 imposes an > additional rule..." > > In other words, the rule was introduced to allow a state > model to be simplified by using assumptions about the > next state (Actually, the assumption is in the ordering > of events. But if you know which event you'll receive > next, then you can deduce the next state). This seems to be a glass half-full vs. glass half-empty disagreement. I don't see your assumption in that statement at all. I see a statement about the prioritization of events based upon state model origin/destination. There is nothing in there about knowing the next state. In my view the fact that the prioritization is at the model-to-model level raises it out of the specific state model to the OCM level. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: RE: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch, Chris D.: --------------------------------- >> 1) When an object instance is deleted, all pending events to that instance are deleted. > I don't know if I agree with this. > For one thing, it seems to break the rule that events are never lost > (i.e., always delivered). It also extremely impractical in some > architectures (e.g., distributed systems). > Can you provide a reference or a rationale? Given two objects, one sending an event to the other, the sender cannot know what state the receiver will be in when the event is delivered, nor what states may be processed in the time period between send and receive. Therefore, in the general case, the architecture (or analysis) MUST account for the possibility that an object will be deleted prior to the delivery of the event. The relationship used to get the destination of the event was valid on send of the event but was deleted prior to its delivery. 
In order to 'analyze this away', the receiver would have to be artificially coupled to the sender's life cycle to know when and how many events it might still receive when it wants to delete itself. In the architecture, the events can be deleted with the instance, or, the event delivery mechanism could discard (ignore) events whose destination is no longer valid. Since I don't believe objects should know who is sending them events, or when, this is a basic architecture issue. Example: 1) Buck sees doe. 2) Buck shakes horns and scrapes hooves on ground to get doe's attention. (event) 3) Doe gets shot. 4a) Since the doe cannot receive the event from the buck, the world application claims there is something wrong with the analysis of the deer, since it did not handle the event, throws a GPF and blue screens. -or- 4b) Since the doe cannot receive the event from the buck, the world application ignores the event. I prefer the latter, but Microsquish would probably disagree. ;-) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > In my view the fact that the prioritization is at the > model-to-model level raises it out of the specific > state model to the OCM level. But your fact is not a fact. There are 2 rules: 1. the ordering of events between any 2 instances is always preserved. This is a model-to-model interaction, and has no priorities. 2. An event generated from a model to itself is _always_ received before any other event directed at the receiving instance. This is *not* a model-to-model interaction. There is no legal mechanism within SM by which any other object in the system can be effected by the presence, or absence, of a self-directed event. Rule (1) is not effected by rule (2). There is simply a prolonged period during which the instance does not receive any events. No other object can know whether this period is due to a long action of a single state, or a sequence of actions joined by self-directed events. The self-directed event is completely hidden from all other objects in the system. There is no model-to- model interaction. When an instance issues a self-directed event, it knows, with probability 1, what event it will receive next. Since it also knows which transition will be taken as a result of that event, it knows, with probability 1, what the next state will be. It is this knowledge, that all other transitions have probability 0, that allows the state model to be simplified. The self directed event guarentees that there will be no model-to-model interaction. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Responding to Riemenschneider... >Think of the ODMS example. A Disk can be either an Online Disk or an Offline Disk. It can also migrate back and forth between the two. When a disk migrates, it still remains a disk. 
The disk is NEVER 'deleted'. All of the supertype attributes of the disk, including the identifer, remain. How is this acomplished? Within the context of a single action, the disk may be deleted and recreated, but to the rest of the analysis, that disk is always there. If you send multiple events to the disk I would think the sender would expect them to be delivered regardless of whether the object migrates. > >I can see valid arguments on both sides of this issue, so more important than the outcome is the definition of the rules. > >1) When an object instance is deleted, all pending events to that instance are deleted. >2) When an object migrates, all events destined to the subtype of the instance get deleted. But, do polymorphic events destined to that instance also get deleted, or are they sent to the new subtype? > I guess analysis is in the eyes of the beholder. :-) I can see your point, but I'm still not sure I like the original explanation. I'll continue to read and learn. :-) >>Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! >Didn't IBM discontinue sales & support of OS/2? (I from '91-'96. IBM seemed to be trying to kill it then.) > IBM's still trying to kill it (only on the client, a new server version was just released), but it just won't die. :-) You can still buy it, and they still release fixpacks and enhancement for it. (They continue to release a version of Netscape (4.61), Java support has been good, and they just liscensed Scitech Display Doctor's video driver software.) They do that, but they publicly proclaim that the OS/2 client is dead. Go figure. IMHO, it's still better than the alternatives, so I'll keep using it until something better comes along. Nuff said on that issue! :-) =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris... > My problem with this is, why did A1 generate E397 and why did it choose > to add the data that it did? A receiving event knows what event it must > know about and the data it that must arrive with that data. In the same > way that if an event is to be generated, even if the destination is not > known, the meaning of the event - in the context of the instance > generating the event - and the data to be carried must be known. Let me take a step back and amplify on my process to show how this might Just Work without a detailed OCM. The A1 and W14 state machines are developed independently. Initially this is done without regard to where the events come from (i.e., just the relevant transitions are identified and the functionality of the associated actions is outlined). Ideally one could then make a pass to determine where each existing transition event should be generated. Since the target model defines the data packet, that data would have to be available/computable wherever the event was generated. This would determine some of the detailed processing in the identified source action. If one has done a good job on the object abstractions there should be some place that seems appropriate for generating each event used somewhere in a transition and that action should have the appropriate data available for the data packet. 
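[Purely a sketch of the 'pass' just described, assuming hypothetical object, action, and event names (they echo the Test/Channel pin-state example that follows): each transition event's data packet is defined by the receiving state model, and the pass checks that at least one candidate action has that data available where the event could be generated.]

required_packet = {                 # defined by the receiving state model
    "T1_SetPinState": {"pin_id", "pin_state"},
    "T2_PatternDone": {"pattern_id"},
}

available_data = {                  # what each candidate generating action can supply
    ("Test", "Loading Pattern"):   {"pattern_id", "pin_id", "pin_state"},
    ("Test", "Reporting Results"): {"pattern_id", "pass_fail"},
    ("Channel", "Driving Pin"):    {"pin_id"},
}

def candidate_actions(needed):
    """Actions whose available data covers the event's data packet."""
    return [action for action, data in available_data.items() if needed <= data]

for event, needed in required_packet.items():
    spots = candidate_actions(needed)
    if spots:
        print(event, "could be generated from", spots)
    else:
        print(event, "has no action with the required data -- revisit the abstractions")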
You are correct that this doesn't always go as planned and a common sticking point is that the data for the data packet isn't conveniently available at the 'ideal' state to generate the event. One way around this is to start iterating over the state model design pass and the event generation pass. Given the preliminary emphasis on getting the abstractions (object and life cycle) correct, this would be done on a good foundation of encapsulation so it should converge relatively quickly. But it might not be terribly efficient. A compromise is to define the the data flows between objects at a high level when working on the OIM. Now the developers of A1 and W14 have some additional requirements when designing their models. Using an example I employed recently, the developer of a Test state model would know that the model is going to have to supply a pin state for each pin in a test pattern. Similarly the developer of the Channel state model would know that someone was going to supply a pin state. The Channel developer will make sure some transition event carries a pin state. Meanwhile the Test developer will make sure that some action will be able produce a pin state (probably extracting it from a Pattern object). More interestingly, the Test developer will pick the correct state to do this based upon information relevant to that state model (i.e., the notion of having loaded a test pattern is almost sure to be represented in a life cycle state). Most of the time, with the compromise, things go pretty well so that there is only a single iteration involving minor corrections (e.g., adding the pin identifier to the event with the pin state). Occasionally there is a major glitch. But these usually reflect a basic problem with the abstractions so that there is no reasonable way that the indicated high level responsibilities can be carried out. For example, one might discover that Test should not be loading pin states but the Pattern object should do this. > This would imply that the state machine can only be produced in > conjunction > with the OCM, this ensures that a state machine only generates an > event if and only if it is required to meet the object's interface as > defined by > the OCM. Note that this also implies that if the OCM changes insofar as > an object's interface changes by producing more or less events the state > machine must also change. In fact, I agree. I am simply advocating, in the compromise approach, that one keep that OCM knowledge at a high level of abstraction. So one defines the _sort_ of data rather than the specific data packets and events. This allows the second pass (designating event generation) to do things like splitting the data flow into two or more events because this is more convenient for the state models to handle without disturbing their initial, independent design very much. This last point is what I see as the advantage of not using a complete OCM to drive the state model designs. If one defines the OCM fully first, that may alter the way the state models are done in a manner that focuses on the most convenient way to handle a particular flow of control. Then one may encounter a problem when doing maintenance to handle a different variation on flow of control. My assertion is, albeit with some hubris, that if the life cycle model is designed as closely as possible to a view of the intrinsic responsibilities of the object, then it will be more maintainable when the domain must provide new functionality. 
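[A minimal sketch, assuming a queue-manager-level routing table of the sort described in the E397/E29 passage quoted earlier in the thread: state model actions generate events without naming a destination, and an analyst-defined mapping supplies the receiver, the receiving transition event, and any data conversion (here, volts to millivolts). The table format and function names are invented for illustration.]

# Sketch only: the routing lives outside the state models, with the analyst.
event_map = {
    # (sender, generated event): (receiver, transition event, data conversion)
    ("A1", "E397"): ("W14", "E29", lambda d: {"millivolts": d["volts"] * 1000.0}),
}

def deliver(receiver, event, data):
    print(f"{receiver} receives {event} with {data}")

def generate(sender, event, data):
    """Called from an action; the action never names a destination."""
    receiver, mapped_event, convert = event_map[(sender, event)]
    deliver(receiver, mapped_event, convert(data))

generate("A1", "E397", {"volts": 1.5})   # -> W14 receives E29 with {'millivolts': 1500.0}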
> The question would then be should the analyst look along the path of the > event on the OCM and identify the destination for the event. If the > analyst > does not, but allows the architecture to sort out the destination, then > the state machine will be immune to OCM changes that affect only the > object that receives the event. Actually, one could imagine an even more complex mechanism where there is no iteration as I indicate above. The designer of Test, for example, simply generates events that will satisfy the rough requirements for external communication. Then these are translated by some sort of bridging mechanism into the transition events in the Channel state model. This could be automated in the architecture to the same extent that domain bridges are automated. Then building truly independent state models would be feasible if one removes the navigation specifics from the state model actions and introduces such a bridging mechanism. Of course there might be a small performance problem, but that's an engineering detail. B-) > However a mechanism would then be > required to identify that the instance of event E1 generated by A1 was > for B1 and not B2. Currently this is performed within the state, it could > be argued that this is hiding the rules of your model. The use of a > mechanism as suggested by Whipp could allow the identification of > the instances that communicate by event. Thus the state machines need > not know where the events generated are going and this information > would be available earlier in the domain development cycle and modelled > in a more visible fashion. Regardless of my pie-in-the-sky notion above, I think I would prefer to see the mapping of events explicitly and clearly in the OOA. It is, after all, the flow of control for the domain. But I think two distinct things need to be shown. One is the relationship navigation that defines the specific destinations. The other is the flow of events. The problem is that these things are interesting in different contexts. One wants to see the flow of control all at once while one is usually interested in the navigation for a specific event. My Grand Vision for this have the OCM enhanced to show self-directed events. With good event naming conventions that would tell one a great deal about flow of control in the domain on one diagram. With a fancy-schmancy CASE tool, one could be a single right-click away from the navigation information for an event in the OCM or in a state model. Similarly a right-click on a relationship in the OIM could bring up the events that require that navigation. [No, Dave, I am not suggesting the information be defined in several places, merely that it be displayed in several places.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > I disagree about there only being one (instance that is). I agree with Simonson's discussion on this. For my part, the fact that the supertype and subtype have identical identifiers means that only one can be instantiated outside an action or else Normal Form will be violated. > I've also noticed > that OL:MWS pg. 57 seems to conflict with what I've stated above. 
Under > "Subtype Migration" it suggests that an instance of a supertype can migrate > between subtypes, so if you migrated, and deleted the previous subtype > instance, according to OL:MWS pg. 46 you would have to delete the supertype > instance. Unfortunately the discussion of the models around pg 57 also strongly suggests that a supertype can have its own state machine (i.e., there is an Harel-like inheritance of function). A few years ago Sally Shlaer clarified that was not the intent. The supertype state model is really just a notational convenience to eliminate the need to duplicate the same states and transitions in each subtype model. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > Not in my universe (or toolset as the case may be), our bridges are 'analyzed' in SDFDs. No magic allowed. [Bridging is done via wormholes. Each wormhole has a corresponding receive SDFD in the other domain's interface bridge.] The N 'states' to receive and resend the events just move into the bridge SDFDs. > > The Client has a wormhole to a server domain with an async return of an event. So far we have a bridge SDFD for the server domain. To preprocess an event on receipt would require a bridge SDFD on the client domain as well. > > What you have proposed, I think, is that the bridge be allowed to have a single SDFD which does some processing and then resends whatever event caused it execute. This is again the same thing that was proposed as 'HOLD with Action' except that it is limited to bridges. Your receive SDFD has to have a process to place an event on the queue in the domain. All I am suggesting is that the process be invoked twice, first with E29 and then with En. If the SDFD is hard-wired in your architecture, then you need another architecture. B-) For us this is a common thing to have to do when domain interfaces are mismatched, regardless of issues about migration, etc. within the domain. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- I feel my knee jerking involuntarily.... :) lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > This last point is what I see as the advantage of not using a complete OCM to > drive the state model designs. If one defines the OCM fully first, that may > alter the way the state models are done in a manner that focuses on the most > convenient way to handle a particular flow of control. Then one may > encounter a problem when doing maintenance to handle a different variation on > flow of control. My assertion is, albeit with some hubris, that if the life > cycle model is designed as closely as possible to a view of the intrinsic > responsibilities of the object, then it will be more maintainable when the > domain must provide new functionality. 
> The "intrinsic responsibilities" of the object are defined by the analyst. If the initial requirements for the state model didn't have this new variation of flow of control, how could the analyst have foreseen it? IOW, if the initial "intrinsic responsibilities" of the state model don't include the new variation, how does the analyst know it exists? And know to model it? Only in maintenance was the new requirement found, so in maintenance the state model changes. Now what's the definition of hubris again? ;) Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > In my view the fact that the prioritization is at the > > model-to-model level raises it out of the specific > > state model to the OCM level. > > But your fact is not a fact. > > There are 2 rules: > > 1. the ordering of events between any 2 instances is > always preserved. > > This is a model-to-model interaction, and has no > priorities. > > 2. An event generated from a model to itself is _always_ > received before any other event directed at the > receiving instance. > > This is *not* a model-to-model interaction. Sure it is. It is a special case where both models and both state machines happen to be the same. > There is no > legal mechanism within SM by which any other object in > the system can be effected by the presence, or absence, > of a self-directed event. Rule (1) is not effected by > rule (2). True. And your point is...? B-) > There is simply a prolonged period during which the > instance does not receive any events. No other > object can know whether this period is due to a > long action of a single state, or a sequence of > actions joined by self-directed events. The > self-directed event is completely hidden from all > other objects in the system. There is no model-to- > model interaction. > > When an instance issues a self-directed event, it > knows, with probability 1, what event it will > receive next. We are both arguing that the destination should be removed from the action's generator process. Given that, the only way the instance knows that the event is self-directed is if it peeks at the queue manager's tables to see where the analyst assigned the event to go. The whole point of removing the destination from the generator process is so the instance isn't polluted by that context information. I come back to the same example I made a while back. I identify all my transition events as Tn and all my generated events as Gm in the state models. When a model generates a particular Gi it doesn't know that it happens to map to Tj, which is a transition in its own state model. That would only be known by the analyst when defining the domain's flow of control for the queue manager(s). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Wed, 17 Nov 1999 10:29:47 Gregory Rochford wrote: >Gregory Rochford writes to shlaer-mellor-users: >-------------------------------------------------------------------- >When I'm confused, I return to the fundamentals. > >Events are directed to an instance. Not to a subtype or supertype, but >to the instance.
>The state machine for the instance can be spread out over the >subtype/supertype hierarchy, >but it's still one (big) state machine. > >If an _instance_ with pending events is deleted, I would consider that >an >analysis error. An event sent to an instance whose state model doesn't >have >a transition for the event is either a can't happen or event ignored. > >(Also, I don't assume events are deleted when an instance is deleted) > >So in answer to 2) > I just wanted to say that I agree, with one small quibble - which is that if an event is sent to an instance and that instance is not expecting that event (because it may be deleted) then I would call that an analysis error. I.e. when analysing a problem, if an event is sent then there must be an instance that can immediately consume that event (no queues). So to reword the above so I agree, it would say 'if an instance is deleted and subsequent events exist directed towards that instance, then I would consider that an analysis error'. I would call that a 'Can't Happen'. Just out of interest for what reason would an analysis model use the 'event ignored' action? Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) Hold event after transistion Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > Gregory Rochford writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > When I'm confused, I return to the fundamentals. > > Events are directed to an instance. Not to a subtype or supertype, but > to the instance. It is directed to an instance, but that can be a supertype instance (possibly polymorphic), or a subtype instance. > The state machine for the instance can be spread out over the > subtype/supertype hierarchy, > but it's still one (big) state machine. > > If an _instance_ with pending events is deleted, I would consider that > an > analysis error. I disagree with this statement. It is possible for several events to be pending to an instance and the first one accepted could cause that instance to be deleted. Other events, at that point are meaningless. You can ensure that an instance exists at the time you generate an event to it, but it is not possible to ensure the instance exists when the event is consumed. > An event sent to an instance whose state model doesn't > have > a transition for the event is either a can't happen or event ignored. > > (Also, I don't assume events are deleted when an instance is deleted) > > So in answer to 2) > > If the event directed to an instance is not currently allowed, because > the event is for a subtype that the instance has migrated from, then > I would call that a can't happen. Do you mean an event directed at the subtype (non-polymorphic event)? I don't think this is a Can't Happen since the subtype could have existed when the event was generated, but it migrated prior to the event being consumed. I would consider this a default ignore. > The resolution for polymorphic events to subtype specific events > occurs when the event is consumed by the state machine, so the event > is processed as normal. > So, this means that the event can be accepted by the subtype instance that corresponds to the supertype instance at the time the event is consumed, even if it existed as a different subtype when the event was generated. That makes sense to me. 
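[A small sketch of the consumption-time resolution described above, borrowing the ODMS Disk / Online Disk / Offline Disk names already in the thread; the event names and the handler table are invented. A polymorphic event is resolved against whichever subtype the instance is when the event is consumed; a subtype-specific event whose subtype has been migrated away from is treated here as a default ignore, per Hogan's reading, though an architecture could equally flag it as an analysis error.]

class Disk:                          # the supertype instance persists across migration
    def __init__(self, disk_id):
        self.disk_id = disk_id
        self.subtype = "Online Disk"

    def migrate(self, new_subtype):
        self.subtype = new_subtype   # same identifier, same supertype instance

    def consume(self, event):
        if event.subtype is None:                       # polymorphic event
            handler = HANDLERS[self.subtype].get(event.name)
        elif event.subtype == self.subtype:             # subtype-specific event
            handler = HANDLERS[event.subtype].get(event.name)
        else:                                           # subtype migrated away before consumption
            handler = None
        if handler:
            handler(self)
        else:
            print(f"{event.name}: event ignored (instance is now {self.subtype})")

class Event:
    def __init__(self, name, subtype=None):             # subtype=None means polymorphic
        self.name = name
        self.subtype = subtype

HANDLERS = {
    "Online Disk":  {"D1_Request": lambda d: print(f"{d.disk_id}: serving request")},
    "Offline Disk": {"D1_Request": lambda d: print(f"{d.disk_id}: queueing request for later")},
}

d = Disk("disk-7")
d.consume(Event("D1_Request"))                           # resolved against Online Disk
d.migrate("Offline Disk")
d.consume(Event("D1_Request"))                           # now resolved against Offline Disk
d.consume(Event("D9_Spin_Up", subtype="Online Disk"))    # subtype-specific: ignored here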
Regards, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: RE: (SMU) Hold event after transistion "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Wed, 17 Nov 1999 12:24:01 Dana Simonson wrote: >"Dana Simonson" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Lynch, Chris D.: >--------------------------------- > >>> 1) When an object instance is deleted, all pending events to that instance are deleted. > >> I don't know if I agree with this. > >> For one thing, it seems to break the rule that events are never lost >> (i.e., always delivered). It also extremely impractical in some >> architectures (e.g., distributed systems). > >> Can you provide a reference or a rationale? > >Given two objects, one sending an event to the other, the sender cannot know what state the receiver will be in when the event is delivered, nor what states may be processed in the time period between send and receive. Therefore, in the general case, the architecture (or analysis) MUST account for the possibility that an object will be deleted prior to the delivery of the event. The relationship used to get the destination of the event was valid on send of the event but was deleted prior to its delivery. In order to 'analyze this away', the receiver would have to be artificially coupled to the sender's life cycle to know when and how many events it might still receive when it wants to delete itself. In the architecture, the events can be deleted with the instance, or, the event delivery mechanism could discard (ignore) events whose destination is no longer valid. > >Since I don't believe objects should know who is sending them events, or when, this is a basic architecture issue. > >Example: >1) Buck sees doe. >2) Buck shakes horns and scrapes hooves on ground to get doe's attention. (event) >3) Doe gets shot. >4a) Since the doe cannot receive the event from the buck, the world application claims there is something wrong with the analysis of the deer, since it did not handle the event, throws a GPF and blue screens. >-or- >4b) Since the doe cannot receive the event from the buck, the world application ignores the event. > >I prefer the latter, but Microsquish would probably disagree. ;-) But if you're analysing the problem, there is no time delay between the sending and delivery of the event. Hence the receiving instance will be in its current state, the sender shold not be sending it unwanted events (analysis error). So in the above example the Buck sends the event to the Doe, while it's alive, the Doe gets shot, before responding to the event, the doe may not have time to see the buck shaking and a scraping, but the fact was that the sound reached the doe is enough to say that the event was received. I think the example I would give would be that if the doe is shot and the buck still sends messages to the doe while it's lying on the ground dead, then that may be an example of sending events to a deleted instance. The answer then is 4a, the Buck shold be fixed because it's sending signals to dead does. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: (SMU) RE: Deleting pending events when instance is deleted (new "Lynch, Chris D. 
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson: ------------------------------- >>> 1) When an object instance is deleted, all pending events to that instance >are deleted. >> I don't know if I agree with this. >> For one thing, it seems to break the rule that events are never lost >> (i.e., always delivered). It also extremely impractical in some >> architectures (e.g., distributed systems). >> Can you provide a reference or a rationale? >Given two objects, one sending an event to the other, the sender >cannot know what state the receiver will be in when the event is >delivered, nor what states may be processed in the time period >between send and receive. Therefore, in the general case, the >architecture (or analysis) MUST account for the possibility that an >object will be deleted prior to the delivery of the event. The >relationship used to get the destination of the event was valid on >send of the event but was deleted prior to its delivery. In order to >'analyze this away', the receiver would have to be artificially >coupled to the sender's life cycle to know when and how many events >it might still receive when it wants to delete itself. In the >architecture, the events can be deleted with the instance, or, the >event delivery mechanism could discard (ignore) events whose >destination is no longer valid. > >Since I don't believe objects should know who is sending them events, >or when, this is a basic architecture issue. > >Example: >1) Buck sees doe. >2) Buck shakes horns and scrapes hooves on ground to get doe's attention. >(event) >3) Doe gets shot. > >4a) Since the doe cannot receive the event from the buck, the world >application claims there is something wrong with the analysis of the >deer, since it did not handle the event, throws a GPF and blue >screens. > >-or- > >4b) Since the doe cannot receive the event from the buck, the world >application ignores the event. > >I prefer the latter, but Microsquish would probably disagree. ;-) This reminds me of a program I once debugged that was getting Divide By Zero errors. I called the author and asked him about it. He said, "Is that a problem?" I said yes, this thing keeps getting kicked off the mainframe and I have to start over. He said, "funny, my machine produces zero when you divide by zero, and that works fine for this system. Just find all the places where this can happen and put IF statements around them." (!) I was skeptical that this would yield correct answers, and I wondered how many goofy results came out of that machine unnoticed. By quietly tossing events which maybe never should have been sent, your architecture is taking away the decision from the analyst, who might *want* an error message when his system accidentally references an object which either (a) never existed or (b) has been deleted. (E.g. a deposit to a bank account that has been deleted.) Your rule also breaks two of my architectures in which it would be *extremely* inconvenient to hunt down the pending events and delete them. (On one of them, they are on mag tape.) (BTW, do you ignore events to all non-existing instances, whatever the reason?) -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" 
Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- [I apologize to anyone who feels that I'm arguing over minutiae, but I do feel the detail is important: even analysis must pay attention to details!] lahman replied: > > This is *not* a model-to-model interaction. > > Sure it is. It is a special case where both models and > both state machines happen to be the same. That is semantic quibbling. > > There is no > > legal mechanism within SM by which any other object in > > the system can be affected by the presence, or absence, > > of a self-directed event. Rule (1) is not affected by > > rule (2). > > True. And your point is...? B-) [In fact, it's not quite true. If you use the interleaved interpretation of time then there is a loophole]. But my point is... > > When an instance issues a self-directed event, it > > knows, with probability 1, what event it will > > receive next. > > We are both arguing that the destination should be > removed from the action's generator process. Actually, I'm arguing that self-directed events are a special case where the destination should be part of the action's generator process. I sense this debate going full circle :-(. Furthermore, I wish to make the distinction between events which are explicitly sent to self, and those that are coincidentally sent to self. This latter category includes events routed along reflexive relationships and along longer relationship loops. If such a loop happens to lead back to the generating instance then I do not regard this as a self-directed event, even though the event is eventually delivered to the sender. Such events should not be given priority. > I come back to the same example I made awhile back. I > identify all my transition events as Tn and all my > generated events as Gm in the state models. When a > model generates a particular Gi it doesn't know that it > happens to map to Tj, which is a transition in its own > state model. (I'm not sure whether your Tn, Tj refer to transitions or events: below, I use Tj to refer to a transition) I would be happy to replace a "generate self-directed event (Gm)" process with a "do transition (Tj)" process. (aka "goto") This would eliminate the confusion: a self-directed event is not an event: it is an abuse of the event mechanism within SM that forces a specific transition. So let's not call it an event. On the wider point, I'm not completely happy with mapping event names within a domain. I'd like to be able to map each fact in a subject matter to exactly one thing in the domain model. To borrow an example: I don't see why the "Doe is Dead" fact should have different events in the Doe, Buck, Hunter and Poacher objects. The act of defining a single event for all 4 objects would not couple the objects to each other: it'd couple them all to the event. Within a domain, I see no problem with that. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: RE: (SMU) Hold event after transistion Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Leslie A.
Munday" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > But if you're analysing the problem, there is no time delay between the sending and delivery of the event. Hence the receiving instance will be in its current state, the sender shold not be sending it unwanted events (analysis error). > When I took the state model class from PT (long ago), the instructor told us that we could think of events as sticky notes that get stuck up on a board. The State model then consumes these one at a time by pulling them off the board and acting on them or ignoring them. >From OL:MWS, page 47: "If an event is generated to an instance that is currently executing an action, the event will not be accepted until after the action is complete." There could be many events generated to an instance while it is executing an action. These events are not lost. So, even at the analysis level, there can be some unknown period of time that elapses between the generation of an event and the consumption. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: RE: (SMU) Hold event after transistion David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > I think the example I would give would be that if the doe is > shot and the buck still sends messages to the doe while it's > lying on the ground dead, then that may be an example of > sending events to a deleted instance. > > The answer then is 4a, the Buck shold be fixed because it's > sending signals to dead does. Alternatively, you fix the Doe to include a state named "Dead". In the "Dead" state, the Doe ignores all events. You can extend this a bit further (I beleive the KC simulator does!) to say that objects are never deleted: they simply enter a state of non-existance. In this state, all events are either ignored or are can't happen. There will often be a fuzzy time, when a deletion event is in a queue, when the object exists but might not exist when later events are delivered (But events could overtake the deletion event!). The only way to completely define the behaviour is to define the deleted states as part of the OOA, and thus never actually delete the instances. You can then realise that the attribute-domains of the identifiers of all the objects in the system define all possible instances within the system. In deterministic architectures this fact can be very useful. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > How would the assigner be employed? Well, in the case of high and low level objects, a low level object might want to find an instance to handle its error condition; or a high level object may want to find an available low level object. If there are other, helper, objects in the middle, then these may also be assigned. If there's a rigid 1:1 relationship between the high and low level objects then this doesn't help: it depends on your model. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. 
Factual statements may be in error. Subject: RE: (SMU) Hold event after transistion Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > > I think the example I would give would be that if the doe is > > shot and the buck still sends messages to the doe while it's > > lying on the ground dead, then that may be an example of > > sending events to a deleted instance. > > > > The answer then is 4a, the Buck shold be fixed because it's > > sending signals to dead does. > > > Alternatively, you fix the Doe to include a state named "Dead". > In the "Dead" state, the Doe ignores all events. > > You can extend this a bit further (I beleive the KC simulator > does!) to say that objects are never deleted: they simply > enter a state of non-existance. In this state, all events are > either ignored or are can't happen. There will often be a > fuzzy time, when a deletion event is in a queue, when the > object exists but might not exist when later events are > delivered (But events could overtake the deletion event!). > The only way to completely define the behaviour is to > define the deleted states as part of the OOA, and thus > never actually delete the instances. > Someone mentioned a bank account example in which a deposit event could be received after the account was deleted. Maybe in that situation, the account should just be in a closed state and not deleted so it can react to the deposit event and initiate some appropriate action. However, in most applications that I've seen, when an instance is deleted, then nothing else matters (as it should be). For example, if I have a target object (DOD application), and that target goes away and the instance is deleted, then any pending events to that instance should not matter. If they do, then I better go back to the drawing board and re-think the control. If I never really deleted the instances, then the architect would have a hard time allocating enough memory. :-) Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: RE: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- >> I think the example I would give would be that if the doe is >> shot and the buck still sends messages to the doe while it's >> lying on the ground dead, then that may be an example of >> sending events to a deleted instance. >> >> The answer then is 4a, the Buck shold be fixed because it's >> sending signals to dead does. > >Alternatively, you fix the Doe to include a state named "Dead". >In the "Dead" state, the Doe ignores all events. I don't think it was ever stated that the doe didn't have a dead state, but I really don't think that a buck should be sending events to a dead doe. Some event should be generated to say that the doe no longer "exists", just as some event would have to have been directed to the buck to indicate the existance of a doe. Otherwise, the buck can be assumed to just be sending "signals" all of the time whether a doe exists or not. =========================================================================== Lee W. 
Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- >Responding to Riemenschneider... >> I disagree about there only being one (instance that is). > >I agree with Simonson's discussion on this. For my part, the fact that the supertype and >subtype have identical identifiers means that only one can be instantiated outside an >action or else Normal Form will be violated. > Maybe I'm missing something, but are you saying that only one instance of a supertype-subtype can exist at a time? I would think you could have an instance of supertype-subtypeA and an instance of supertype-subtypeB at the same time. Maybe I'm wrong. =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... >> Not in my universe (or toolset as the case may be), our bridges are 'analyzed' in SDFDs. No magic allowed. [Bridging is done via wormholes. Each wormhole has a corresponding receive SDFD in the other domain's interface bridge.] The N 'states' to receive and resend the events just move into the bridge SDFDs. >> >> The Client has a wormhole to a server domain with an async return of an event. So far we have a bridge SDFD for the server domain. To preprocess an event on receipt would require a bridge SDFD on the client domain as well. >> >> What you have proposed, I think, is that the bridge be allowed to have a single SDFD which does some processing and then resends whatever event caused it execute. This is again the same thing that was proposed as 'HOLD with Action' except that it is limited to bridges. >Your receive SDFD has to have a process to place an event on the queue in the domain. All I am suggesting is that the process be invoked twice, first with E29 and then with En. >If the SDFD is hard-wired in your architecture, then you need another architecture. B-) For us this is a common thing to have to do when domain interfaces are mismatched, regardless of issues about migration, etc. within the domain. If there is a one-to-one mapping, a receive SDFD may not be needed. If the Client domain has a wormhole with an async return (double wall bubble with an event output) then there need not be a receive SDFD. The wormhole defines the return event destination and the data to place on that event (which can be a combination of data from the local and remote sides of the wormhole). Therefore, to get 20 external events which indicate things happened, I need 20 wormholes and no receive SDFDs. If however, I need to modify the return, the wormhole event has to be intercepted by a receive SDFD which can do things like modify parameters throw extra events etc., but now I have my 20 extra states back. 
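As a rough sketch of the two shapes just described (a one-to-one wormhole return that needs no receive handler, and a receive handler that intercepts and reworks the return), here is a small Python illustration. The event name E29 echoes the earlier discussion; everything else (function names, the status field) is invented, and none of this describes any particular tool's bridge mechanism.

# Sketch only: two ways an asynchronous wormhole return can reach the
# client domain's event queue.
def post_event(queue, event_label, **data):
    """Place an event on a domain's event queue."""
    queue.append((event_label, data))

client_queue = []

def wormhole_return_direct(event_label, **data):
    # (a) one-to-one mapping: the async return goes straight onto the queue
    post_event(client_queue, event_label, **data)

def wormhole_return_intercepted(event_label, **data):
    # (b) mismatched interfaces: rework the data, throw an extra event,
    # then re-send the original return event (invoked twice, E29 then En)
    data["status"] = data.get("status", "ok").upper()
    post_event(client_queue, "E29: preprocessed", **data)
    post_event(client_queue, event_label, **data)

wormhole_return_direct("E7: job done", job_id=3)
wormhole_return_intercepted("E7: job done", job_id=4, status="late")
print(client_queue)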
Back to the issue: We've gone off on a detailed discussion of the specific analysis of the model relating to an example. My real discussion point was really meant to be the philosophical issue of how you decompose what exactly the STT entries mean if HOLD is allowed. Going back to the response tables, before HOLD it looked like (monospaced font):

                 Transition   Exception
  TRANSITION     true         false
  IGNORED        false        false
  CAN'T HAPPEN   false        true
  1 ???          true         true

Events are always consumed. A transition on an exception condition, while interesting, is not part of 'normal mode' analysis. So, I'm not too concerned that it is not defined. Adding HOLD requires also considering whether to consume the event:

                 Consume   Transition   Exception
  HOLD           false     false        false
  HOLD w/ACTION  false     true         false
  2 ???          false     false        true
  3 ???          false     true         true

What I am after is a definitive rule on whether the non-exception case of HOLD with Action should be allowed and why. Both modes of the HOLD (with and without action) are purely for model simplification. They do not add anything to the usefulness of the method. This is purely an aesthetic judgement as to whether the extra transitions, in the case of the HOLD, or extra states, in the case of the HOLD with action, reveal important facts about the problem space or whether they only clutter up the diagrams, potentially becoming a distraction from the analysis of the problem space. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- Lynch, Chris D. SDX wrote > > My motivation to join this thread was to disagree with the notion > that event priorities (in the SM formalism) are a necessary addition > to SMOOA. I contend that it puts the method on the slippery slope > to arbitrary event priorities and a loss of a fundamental property > of the method (guaranteed event ordering between instances.) > I would argue that the guaranteed event ordering is not affected by the use of HOLD. The order that events are made available to the receiving instance is maintained. The HOLD allows the receiving instance to choose the order that it removes the events from the queue. The question may then become does the ordering rule apply to the transportation mechanism of the events between instances or the order that the events must be consumed? My interpretation is the former although the rule was written prior to the suggestion of the HOLD so clarification may be required. As to unnecessarily extending the notation of the method, I agree that any additions should be resisted. A major benefit of SMOOA is its small vocabulary, but if something is missing then its addition must be considered. Dave Harris Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Lynch... > > > On a meta-method note, if I find myself chafing under the limitations > > of a particular method (such as SMOOA), I use another > > method. This helps to keep me out of the "misfit method" syndrome. > > The other side of the coin is that I haven't seen any situations where I couldn't > model what I wanted to model.
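Simonson's two response tables above can also be restated as data so the (consume, transition, exception) combinations are easy to scan. This is only a Python restatement of the tables as posted, with the undefined cells left as '???'; it is not a statement of what the method itself defines.

# Restatement of the posted response tables; purely illustrative.
DISPOSITIONS = {
    #  name              consume  transition  exception
    "TRANSITION":        (True,   True,       False),
    "IGNORED":           (True,   False,      False),
    "CAN'T HAPPEN":      (True,   False,      True),
    "??? (1)":           (True,   True,       True),
    "HOLD":              (False,  False,      False),
    "HOLD w/ACTION":     (False,  True,       False),
    "??? (2)":           (False,  False,      True),
    "??? (3)":           (False,  True,       True),
}

def describe(name):
    consume, transition, exception = DISPOSITIONS[name]
    return (f"{name}: event {'consumed' if consume else 'held'}, "
            f"{'causes a' if transition else 'no'} transition, "
            f"{'raises an exception' if exception else 'no exception'}")

for name in DISPOSITIONS:
    print(describe(name))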
That might be because our applications don't push > the envelope very much (we have only two domains that actually have asynchronous > events). Or it might be that the notation is sufficient. I have to say that until I used a tool that provided HOLD I do not recall that I needed it. However as I am currently using a tool that supports it I consider it to be another construct available for use. At the same time in most instances that I have seen HOLD used it is neither necessary nor does it simplify the state machines concerned. Dave Harris Subject: Re: (SMU) RE: Deleting pending events when instance is deleted "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch: ------------------------------- > By quietly tossing events which maybe never should have > been sent, your architecture is taking away the decision > from the analyst, who might *want* an error message > when his system accidentally references > an object which either (a) never existed or (b) has been deleted. > (E.g. a deposit to a bank account that has been deleted.) > Your rule also breaks two of my architectures > in which it would be *extremely* inconvenient to > hunt down the pending events and delete them. > (On one of them, they are on mag tape.) > (BTW, do you ignore events to all non-existing instances, > whatever the reason?) Unless the parallel discussion thread eventually results in destinationless events, you can never send an event to an instance that does not exist. You have to navigate the relationship and find a destination. If, however, in the life cycle of the destination, the object can get deleted, there is no way to know whether there are pending events for that object. Short of doing something really kludgy like never deleting objects, or requiring the architecture to note that the delete occurred and hold the object in limbo until all currently pending events have been processed, deleting pending events with the object seems to be the cleanest approach. If there are objects which get deleted, that need to know if an event occurs, then that should be analyzed. Example, think of a telephone number. For a period of time, say 6 months, after you disconnect your phone, you can have a recording played when someone calls that number giving them the new phone number. After that time, if they dial that number it will be 'not in service', until, that number is reassigned to a new account. After that point, when the number is dialed you get the new owner of that number. This is part of the problem space and should be analyzed. Our architectures delete pending events with the object. If an 'establish connection' event was directed to a phone account that was closed it would respond with the 'play new number message' event. If that same 'establish connection' event was directed to a phone account 6 months later, and after the event was sent the account was deleted, the event would be deleted with the account. The account is no longer capable of handling the event. This would result in the fail-safe delayed event that causes the message back to the dialer 'your call did not go through' to get dequeue when the timer expired. If the number is dialed again, there would be no account instance to direct it to, so the dialed digits object would send a 'play not in service message' event. If an event is dequeued with no destination, the architecture posts an error message indicating that the instance of object X with identifier Y was not found. 
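A minimal Python sketch of the policy just described, with invented names: pending events are deleted along with their destination instance, and an event dequeued with no destination is reported as an error rather than silently dropped.

# Sketch only; the class, identifiers and event labels are illustrative.
class Dispatcher:
    def __init__(self):
        self.instances = {}     # identifier -> instance
        self.queue = []         # (destination identifier, event label)

    def create(self, ident, instance):
        self.instances[ident] = instance

    def delete(self, ident):
        del self.instances[ident]
        # delete pending events along with the instance
        self.queue = [(d, e) for d, e in self.queue if d != ident]

    def generate(self, ident, event_label):
        self.queue.append((ident, event_label))

    def run(self):
        while self.queue:
            dest, event_label = self.queue.pop(0)
            instance = self.instances.get(dest)
            if instance is None:
                print(f"ERROR: instance {dest} not found for {event_label}")
            else:
                instance.accept(event_label)

class Account:
    def accept(self, event_label):
        print("Account handles", event_label)

d = Dispatcher()
d.create("acct-1", Account())
d.generate("acct-1", "E10: establish connection")
d.delete("acct-1")          # pending E10 is deleted with the account
d.generate("acct-2", "E10: establish connection")
d.run()                     # reports an error: acct-2 was never created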
The existence of an event with a non-existent destination is considered an error. (An NT architecture may put a message in the event log and keep running; an embedded architecture with no output method may just restart itself.) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... >>> I disagree about there only being one (instance that is). >> >>I agree with Simonson's discussion on this. For my part, the fact that the supertype and >>subtype have identical identifiers means that only one can be instantiated outside an >>action or else Normal Form will be violated. >> >Maybe I'm missing something, but are you saying that only one instance of a >supertype-subtype can exist at a time? I would think you could have an >instance of supertype-subtypeA and an instance of supertype-subtypeB at the >same time. Maybe I'm wrong. I realize this was directed to lahman, but I'm sure he won't hesitate to also respond. There can be multiple instances of the object tree, but no part (subtype or supertype) of a single instance can exist without the other after the end of the action. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: RE: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday: >But if you're analyzing the problem, there is no time delay between the sending and delivery of the event. Hence the receiving instance will be in its current state, the sender should not be sending it unwanted events (analysis error). If you're analyzing the problem, you don't know or care if the events are received. The buck sends the event, gets no response, so he walks away (assuming he is polite). This could be because the doe is dead, or she is just not interested. >So in the above example the Buck sends the event to the Doe, while it's alive, the Doe gets shot, before responding to the event, the doe may not have time to see the buck shaking and a scraping, but the fact was that the sound reached the doe is enough to say that the event was received. No it is not! Micro analyze this. The bullet is just breaking the skin when the buck sends the event. The doe is alive and well and fully capable of receiving the event. The doe then receives the bullet event and dies instantly. It is no longer capable of receiving and processing the event from the buck. The event must be deleted by the world application along with the doe or the event is left in limbo. >I think the example I would give would be that if the doe is shot and the buck still sends messages to the doe while it's lying on the ground dead, then that may be an example of sending events to a deleted instance. >The answer then is 4a, the Buck should be fixed because it's sending signals to dead does. My point is that the buck should be given the possibility of responding to the non-receipt of the event, dead or ignored, and walking away, rather than having the whole world cease to exist since the event could not be delivered. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F.
Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Hold event after transistion lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Responding to Riemenschneider... >>>> I disagree about there only being one (instance that is). >>> >>>I agree with Simonson's discussion on this. For my part, the fact that the supertype and >>>subtype have identical identifiers means that only one can be instantiated outside an >>>action or else Normal Form will be violated. >>> >>Maybe I'm missing something, but are you saying that only one instance of a >>supertype-subtype can exist at a time? I would think you could have an >>instance of supertype-subtypeA and an instance of supertype-subtypeB at the >>same time. Maybe I'm wrong. > >I realize this was directed to lahman, but I'm sure he won't hesitate to also respond. > >There can bve multiple instances of the object tree, but no part (subtype or supertype) of a single instance can exist without the other after the end of the action. > Which doesn't disagree with what I've been saying (I think :-)). Maybe what we have here is a failure to communicate. I don't know how we may have gotten here, because I haven't been saving the messages. :-) =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "David Harris" wrote: > As to un-necassarily extending the notaion of the method I > agree that any additions should be resisted. A major benefit > of SMOOA is its small vocabulary, but if something is missing > then its addition must be considered. I will agree that unnecessary extentions should be resisted. But I'm also a beleiver in continuous improvement. I tend to look at modifications in terms of shuffling the pieces and cleaning up inconsistancies/ambiguities; with the occasional radical idea to keep things interesting. Perhaps the only thing worse than a method/notation that attempts to do everything is a language that is dead, and which never adjusts to the experiences of its users. Dave. Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris: --------------------------- >> My motivation to join this thread was to disagree with the notion >> that event priorities (in the SM formalism) are a necessary addition >> to SMOOA. I contend that it puts the method on the slippery slope >> to arbitrary event priorities and a loss of a fundamental property >> of the method (guaranteed event ordering between instances.) >> > >I would argue that the guaranteed event ordering is not affected by the >use of HOLD. The order that events are made available to the receiving >instance is maintained. The HOLD allows the receiving instance to >choose the order that it removes the events from the queue. 
The question >may then become does the ordering rule apply to the transportation >mechanism of the events between instances or the order that the events >must be consumed? My interpretation is the former although the rule was >written prior to the suggestion of the HOLD so clarification may be required. Your distinction between the transport, offering, and reception of events seems to depend on architectural details of the typical architecture, details that are not always feasible, and so for me it raises flags. For example, if I want to implement a state model in gates, implementing HOLD could be a nasty problem. Also, I think the method doesn't say anything about transportation, but, rather, that when A speaks to B, the words come in order, are consumed on arrival, and are not lost. The usual inference is that a FIFO queue is implied and that no reordering of events takes place in the reception (and processing) of events. -Chris Subject: RE: (SMU) RE: Deleting pending events when instance is deleted (n David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Dana Simonson" wrote: > Unless the parallel discussion thread eventually results in > destinationless events, you can never send an event to an > instance that does not exist. You have to navigate the > relationship and find a destination. Actually, that's not quite true. That is an interpretation by some case tools of the event generation mechanism. It is not a requirement of the method. If you look at OOA96, section 5.2, page 23, the syntax definition clearly leaves the door open to sending events to non-existing instances. For example: E1: event meaning (identifying attributes;supplemental data) Note that the first part is "identifying attributes", not "handler to instance". This means that if I have an object: Foo(*id : integer); then I can send an event "F1: bar(3;)" without first checking that instance Foo(3) exists, and without following any relationship to it. (BTW, my proposal in the parallel thread would force event routing to follow the relationship). This may be an analysis error, but it is not prevented by the method. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Hold event after transistion David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Dana Simonson" wrote: > There can be multiple instances of the object tree, but no > part (subtype or supertype) of a single instance can exist > without the other after the end of the action. A consistent model requires all the instances in the path, from root to leaf, to exist. However, an action is not required to leave the system in a consistent state at the time of the end of the action. OL:MWS, page 106 ("Rules about Consistent Data"): "1. When an action completes, it must leave the system consistent, either by writing data to paint a consistent picture or by generating events to cause other state machines to come into conformance with the data changes made by the sender of the event" So, at the end of an action, one subtype may generate an event that causes another subtype (of the same supertype-instance) to delete itself. Alternatively, a deletion state of one subtype can send a creation event to another subtype. "2.
It is the responsibility of the analyst to ensure that the data used by an action is consistent" In other words, if you know that you're using events to bring about consistency, then you must provide synchronization to prevent the use of inconsistent data. Dave. p.s. I still think it's a mistake to think of the entire tree as a single instance in the analysis. It adds complexity for no benefit. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Lynch, Chris D. SDX" wrote: > For example, if I want to implement a state > model in gates, implementing HOLD could be a nasty problem. Not really. It's simply extra state. You'll need to transform the state model into a h/w state machine anyway; and it's possible that a single action could take multiple clock cycles. As soon as you have the infrastructure to handle that, adding the extra state for hold events is trivial. That said, I've never used the HOLD mechanism. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... >>There can be multiple instances of the object tree, but no part (subtype or supertype) of a single instance can exist without the other after the end of the action. > >Which doesn't disagree with what I've been saying (I think :-)). Maybe what >we have here is a failure to communicate. I don't know how we may have gotten >here, because I haven't been saving the messages. :-) Ignoring the valid and correct clarification Whipp offered on the possibility of asynchronous migration, the crux of the difference is not the lack of existence of a part of the tree, but the fact that you can direct an event to either the subtype or to the supertype (polymorphic event). I can think of no reason to ever leave an event destined to a subtype in the system (at least as far as the analysis views it) after the object is deleted. However, if the object instance is deleted and there exists an event destined to the supertype, it may be advantageous to leave that event in the queue if the delete was part of a migrate. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Event Queues WAS:(RE: (SMU) Hold event after transistion) "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Wed, 17 Nov 1999 20:09:17 Bary D Hogan wrote: >Bary D Hogan writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >> "Leslie A. Munday" writes to shlaer-mellor-users: >> -------------------------------------------------------------------- >> >> But if you're analysing the problem, there is no time delay between the sending and >delivery of the event.
Hence the receiving instance will be in its current state, the sender >should not be sending it unwanted events (analysis error). >> > >When I took the state model class from PT (long ago), the instructor >told us that we could think of events as sticky notes that get stuck >up on a board. The State model then consumes these one at a time by >pulling them off the board and acting on them or ignoring them. > >From OL:MWS, page 47: "If an event is generated to an instance that >is currently executing an action, the event will not be accepted until >after the action is complete." > >There could be many events generated to an instance while it is >executing an action. These events are not lost. > >So, even at the analysis level, there can be some unknown period of time >that elapses between the generation of an event and the consumption. > > See, this is where I diverge from S-M. Like I said, I don't use any S-M tools so I am not rigidly bound by S-M rules. [As an aside, I have not had my hands on an S-M tool which I consider appropriate for analysis. I've had this argument with the instructor at PT whose name I forget (to whom I apologise).] When I model events, I treat them as instances in time with no substance. I.e. they cannot be held by sticky notes on whiteboards. If you wish to record an event then you need to set a data variable or change state, immediately. This is how I consider an event queue to work. It is an object instance whose job is to manage the receiving and sending of events. When an event is received it records that event as a data item. When an object instance is ready to receive an event the event queue object will send the appropriate event to the object instance. The fact that I can capture and analyse any complete set of functional requirements without an event queue suggests to me that the 'event queue manager' is a pollution of the analysis model. I.e. There is no requirement for it! The third paragraph states that an event is not processed until the action is complete. See, I absolutely abhor this S-M rule. It stinks of implementation. If my state model receives an event, it stops what it is doing and processes the event. That means that if it causes a transition, then whatever action was being executed is aborted, and the instance changes state and starts executing the new action. If the event is ignored, then the instance continues executing the action in its current state. If the event is 'Can't Happen' then an exception is generated. Using this method one does not need the 'break' statement to exit a state action. You want to exit a loop because you've finished your work, send yourself an event. The instance immediately aborts all actions and transitions to another state. So much more elegant than using command line instructions to break out of an action. So in summary, when you read my comments please note that they are remarks on the analysis model, not necessarily on the S-M method. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Hold event after transistion "Leslie A.
Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Wed, 17 Nov 1999 18:10:23 David.Whipp wrote: >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >> I think the example I would give would be that if the doe is >> shot and the buck still sends messages to the doe while it's >> lying on the ground dead, then that may be an example of >> sending events to a deleted instance. >> >> The answer then is 4a, the Buck shold be fixed because it's >> sending signals to dead does. > > >Alternatively, you fix the Doe to include a state named "Dead". >In the "Dead" state, the Doe ignores all events. > >You can extend this a bit further (I beleive the KC simulator >does!) to say that objects are never deleted: they simply >enter a state of non-existance. In this state, all events are >either ignored or are can't happen. There will often be a >fuzzy time, when a deletion event is in a queue, when the >object exists but might not exist when later events are >delivered (But events could overtake the deletion event!). >The only way to completely define the behaviour is to >define the deleted states as part of the OOA, and thus >never actually delete the instances. > Not quite sure what the 'deleted' state is acheiving for us. In the deleted state events are Ignored or Can't Happen. How do we know which? Ignored I think of as the receiving instance staying in its current state and continuing with its processing when the event is received. Can't Happen would be a case of sending an event to an instance that doesn't exist. I.e. an analysis error. Having the instance in a deleted state appears to take away the benefit of being able to detect errors when Can't Happen events occur. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Hold event after transistion David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Leslie A. Munday" wrote: > Not quite sure what the 'deleted' state is acheiving for us. If it introduced architecturally as part of a simulation then it provides a focal point for reporting errors when they occur: a user can see which non-existing instance received the event. If introduced architecturally by a simple architecture, they are an easy way of eliminating dangling pointers. If introduced architecturally for a deterministic real-time system, then you can create all the instances at time zero with some in the "does not exist" state -- this eliminates the memory management associated with creating/deleting instances. If introduced in the anaysis, then it eliminates the question about the behaviour of deleted instances (because instances are never deleted!). A clever architecture could implement the state (via coloring) by deleting the object and then checking for null-pointers :-). > In the deleted state events are Ignored or Can't Happen. How > do we know which? Depends. Do you turn off assertions in production code? The method doesn't define which you should use. The architecture should do whatever is appropriate. If you introduce the deleted state in anaysis, then the analyst can decide on a case-by-case basis. > Ignored I think of as the receiving instance staying in its > current state and continuing with its processing when the > event is received. 
By the time an instance processes any event (even to ignore it), the action is complete. So there is never any processing to continue. If you use instance-based queuing then it is possible that, on receipt of any event, the instance will pause its processing to queue the event, then complete the action before processing the event. As an optimisation you could ignore the event instead of queueing it, but you probably wouldn't gain anything. I'm not sure how this relates to the "deleted" state issue. > Can't Happen would be a case of sending an event to an > instance that doesn't exist. I.e. an analysis error. That would seem to be a sensible default behaviour, especially in simulation. But if someone wants to flush the queue when an instance is deleted, then all the flushed events are effectively ignored. > Having the instance in a deleted state appears to take away > the benefit of being able to detect errors when Can't Happen > events occur. You can always put "can't happen" entries in all the cells of that row of the STT. There is a design pattern called the "null object" pattern (http://home.att.net/~bwoolf/Null_Object/Null_Object.htm). In this, special "null objects" are used in place of null pointers (or references/handles). These null objects implement all methods in the object's interface so that the client does not need to test for null-pointers. The deleted state is analogous to this pattern. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: Event Queues WAS:(RE: (SMU) Hold event after transistion) lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- >When I model events, I treat them as instances in time with no substance. >I.e. they cannot be held by sticky notes on whiteboards. If you wish to >record an event then you need to set a data variable or change state, >immediately. > >This is how I consider an event queue to work. It is an object instance >whose job is to manage the receiving and sending of events. When an event >is received it records that event as a data item. > >When an object instance is ready to receive an event the event queue object >will send the appropriate event to the object instance. > >The fact that I can capture and analyse any complete set of functional >requirements without an event queue suggests to me that the 'event queue >manager' is a pollution of the analysis model. > >I.e. There is no requirement for it! > >The third paragraph states that an event is not processed until the action >is complete. See, I absolutely abhor this S-M rule. It stinks of >implementation. If my state model receives an event, it stops what it is >doing and processes the event. > >That means that if it causes a transition, then whatever action was being >executed is aborted, and the instance changes state and starts executing >the new action. > >If the event is ignored, then the instance continues executing the action >in its current state. > >If the event is 'Can't Happen' then an exception is generated. > >Using this method one does not need the 'break' statement to exit a state >action. You want to exit a loop because you've finished your work, send >yourself an event.
The instance immediately aborts all actions and >transitions to another state. So much more elegant than using command line >instructions to break out of an action. > While I can't think of a real world requirement that couldn't be handled without an event queue, I have some concerns about analyzing them away. I think that the analysis would be so tightly synchronized that you would incur a very large cost to modify it. i.e., your requirements had better be complete, and your analysis had better be correct. Correct can also mean implementation correct, because a very tight analysis won't allow much room for movement in the implementation. I also think that such a tightly synchronized analysis would take much longer to complete, and for large systems, it may be impossible. =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: Event Queues WAS:(RE: (SMU) Hold event after transistion) "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday > See this is where I diverge from S-M. > > Like I said, I don't use any S-M tools so I am not rigidly bound by S-M rules. > > [As an aside, I have not had my hands on a S-M tool which I consider appropriate for analysis. I've had this argument with the instructor at PT whose name I forget (to whom I apologise).] > > When I model events, I treat them as instances in time with no substance. I.e. they cannot be held by sticky notes on whiteboards. If you wish to record an event then you need to set a data variable or change state, immediately. > > This is how I consider an event queue to work. It is an object instance whose job is to manage the receiving and sending of events. When an event is received it records that event as a data item. > >When an object instance is ready to receive an event the event queue object will send the appropriate event to the object instance. > > The fact that I can capture and analyse any complete set of functional requirements without an event queue suggests to me that the 'event queue manager' is a pollution of the analysis model. > > I.e. There is no requirement for it! > > The third paragraph states that an event is not processed until the action is complete. See I absolutely adhor this S-m rule. It stinks of implementation. If my state model receives an event, it stops what it is doing and processes the event. > > That means that if it causes a transition, then whatever action was being executed is aborted, and the instance changes state and starts executing the new action. > > If the event is ignored, then the instance continues executing the action in its current state. > > If the action is 'Can't happen' then an exception is generated. > > Using this method one does not need the 'break' statement to exit a state action. You want to exit a loop because you've finished your work, send yourself an event. The instance immediately aborts all actions and transitions to another state. So much more elegant than using command line instructions to break out of an action. > > So in summary, when you read my comments please note that they are remarks on the analysis model, not necessarily on the S-M method. How do you handle asynchronous real world problems? 
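Simonson's question is easiest to see with a minimal run-to-completion sketch (Python, invented names and amounts); the account example that follows relies on exactly this queue-and-complete behaviour rather than on aborting the current action.

# Illustrative sketch of run-to-completion queueing: events arriving
# while an action executes are queued, never lost, and are consumed
# only after the current action completes.
class Account:
    def __init__(self):
        self.balance = 0
        self.pending = []        # events queued in arrival order

    def generate(self, event, amount):
        # arrival never interrupts a running action; it only queues
        self.pending.append((event, amount))

    def run(self):
        while self.pending:
            event, amount = self.pending.pop(0)
            # each action runs to completion before the next event
            if event == "withdraw":
                self.balance -= amount
            elif event == "deposit":
                self.balance += amount

acct = Account()
acct.generate("withdraw", 40)
acct.generate("deposit", 100)   # queued behind the withdrawal, not lost
acct.run()
print(acct.balance)             # -> 60; neither event was aborted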
If an account receives a deposit event while processing a withdrawal event, you can't simply abort the withdrawal. It must complete, and the deposit (and any other pending asynchronous events) must also be processed. The sender of the event (ATM, teller, e-transfer, etc.) should have no concern whatsoever about the current state of the account or its readiness to process the event. I would assume that this could be solved without using events (or queues), but I wouldn't expect the analysis to even remotely resemble the real world problem, and I wouldn't want to try to use that kind of model to communicate with a customer. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > > This last point is what I see as the advantage of not using a complete OCM to > > drive the state model designs. If one defines the OCM fully first, that may > > alter the way the state models are done in a manner that focuses on the most > > convenient way to handle a particular flow of control. Then one may > > encounter a problem when doing maintenance to handle a different variation on > > flow of control. My assertion is, albeit with some hubris, that if the life > > cycle model is designed as closely as possible to a view of the intrinsic > > responsibilities of the object, then it will be more maintainable when the > > domain must provide new functionality. > > > > The "intrinsic responsibilities" of the object are defined by the > analyst. > If the initial requirements for the state model didn't have this new > variation of flow of control, how could the analyst have foreseen it? > IOW, if the initial "intrinsic responsibilities" of the state model > don't include the new variation, how does the analyst know it exists? > And know to model it? We are dangerously close to a metaphysical discourse here... I believe that one of the principal benefits of OT, particularly at the OOA level, is that one solves the problem using models that are closely related to the problem space. Thus most objects in an OOA represent real entities (concrete or conceptual) in the problem space. So those abstractions should capture as closely as possible the intrinsic properties of the real world entities. Those intrinsic properties include both attributes and behavior. Though the level of detail is dictated by the level of abstraction of the domain mission, the properties being modeled are invariant. The assertion that OT provides better maintainability rests on the notion that the more closely objects reflect the problem space, the less likely it will be that modifications to the model will be more complex than the modifications to the problem space that led to the maintenance. The corollary is that if one incorporates specific views of the solution into the object abstractions, one runs the risk that maintenance will be more difficult to the extent that those specific solution views do not correspond directly with problem space entities and their properties. So it is not an issue of the analyst anticipating problem space change. It is an issue of the analyst not creating vicarious impediments to future changes by altering the intrinsic abstractions for a favored solution.
In my view the flow of control represents the analyst's solution to the problem at hand while the information and state models represent the intrinsic problem space abstractions. This is why I am very reluctant to let a detailed OCM drive the state model design -- one is using a specific solution to define invariants that are more general than a specific solution. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford.. > > > > This last point is what I see as the advantage of not using a complete OCM to > > > drive the state model designs. If one defines the OCM fully first, that may > > > alter the way the state models are done in a manner that focuses on the most > > > convenient way to handle a particular flow of control. Then one may > > > encounter a problem when doing maintenance to handle a different variation on > > > flow of control. My assertion is, albeit with some hubris, that if the life > > > cycle model is designed as closely as possible to a view of the intrinsic > > > responsibilities of the object, then it will be more maintainable when the > > > domain must provide new functionality. > > > > > > > The "intrinsic resposibilities" of the object are defined by the > > analyst. > > If the initial requirements for the state model didn't have this new > > variation of flow of control, how could the analyst have foreseen it? > > IOW, if the initial "intrinsic resposibilities" of the state model > > doesn't include the new variation, how does the analyst know it exists? > > And know to model it? > > We are dangerously close to a metaphysical discourse here... Or aesthetics... :) > > I believe that one of the principle benefits of OT, particularly at the OOA level, > is that one solves the problem using models that are closely related to the problem > space. Thus most objects in an OOA represent real entities (concrete or > conceptual) in the problem space. So those abstractions should capture as closely > as possible the intrinsic properties of the real world entities. Those intrinsic > properties include both attributes and behavior. > > Though the level of detail is dictated by the level of abstraction of the domain > mission, the properties being modeled are invariant. The assertion that OT > provides better maintainability rests on the notion that the more closely objects > reflect the problem space, the less likely it will be that modifications to the > model will be more complex than the modifications to the problem space that led to > the maintenance. The corollary is that if one incorporates specific views of the > solution into the object abstractions, one runs the risk that maintenance will be > more difficult to the extent that those specific solution views do not correspond > directly with problem space entities and their properties. > > So it is not an issue of the analyst anticipating problem space change. It is an > issue of the analyst not creating vicarious impediments to future changes by > altering the intrinsic abstractions for a favored solution. 
In my view the flow > of control represents the analyst's solution to the problem at hand while the > information and state models represent the intrinsic problem space abstractions. > This is why I am very reluctant to let a detailed OCM drive the state model design > -- one is using a specific solution to define invariants that are more general than > a specific solution. > I don't disagree with anything you say. I would use the OCM as a tool to "discover" the intrinsic problem space dynamics. From the description of your process, it seems like the analyst must intuite what the behavior of an object is *in isolation from* other objects in the domain. That seems to a) place too much responsibility on the analyst's intuition. b) ignore object interaction until later (too late IMO) in the process. I can imagine scenarios where two analysts are building the state models for two different objects, and each thought the other was going to take care of some requirement (or they both do it). Or did I miss something in your process description? best gr Subject: RE: (SMU) Hold event after transistion "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Wed, 17 Nov 1999 21:21:43 Bary D Hogan wrote: >Bary D Hogan writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >> > I think the example I would give would be that if the doe is >> > shot and the buck still sends messages to the doe while it's >> > lying on the ground dead, then that may be an example of >> > sending events to a deleted instance. >> > >> > The answer then is 4a, the Buck shold be fixed because it's >> > sending signals to dead does. >> >> >> Alternatively, you fix the Doe to include a state named "Dead". >> In the "Dead" state, the Doe ignores all events. >> >> You can extend this a bit further (I beleive the KC simulator >> does!) to say that objects are never deleted: they simply >> enter a state of non-existance. In this state, all events are >> either ignored or are can't happen. There will often be a >> fuzzy time, when a deletion event is in a queue, when the >> object exists but might not exist when later events are >> delivered (But events could overtake the deletion event!). >> The only way to completely define the behaviour is to >> define the deleted states as part of the OOA, and thus >> never actually delete the instances. >> > >Someone mentioned a bank account example in which a deposit event >could be received after the account was deleted. Maybe in that >situation, the account should just be in a closed state and not >deleted so it can react to the deposit event and initiate some >appropriate action. > But what about all these dead deer lying all over the place? If someone doesn't delete them they'll start smelling bad. Seriously, what's the consequence of leaving records of 'dead' object instances all over the place? Won't something unmanageable happen? Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: RE: (SMU) Hold event after transistion "Leslie A. 
Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Thu, 18 Nov 1999 12:07:41 Dana Simonson wrote: >"Dana Simonson" writes to shlaer-mellor-users: >-------------------------------------------------------------------- >>But if you're analyzing the problem, there is no time delay between the sending and delivery of the event. Hence the receiving instance will be in its current state, the sender should not be sending it unwanted events (analysis error). > >If your analyzing the problem, you don't know or care if the events are received. The buck sends the event, gets no response, so he walks away (assuming he is polite). This could be because the doe is dead, or she is just not interested. > >>So in the above example the Buck sends the event to the Doe, while it's alive, the Doe gets shot, before responding to the event, the doe may not have time to see the buck shaking and a scraping, but the fact was that the sound reached the doe is enough to say that the event was received. > >No it is not! Micro analyze this. The bullet is just breaking the skin when the buck sends the event. The doe is alive and well and fully capable of receiving the event. The doe then receives the bullet event and dies instantly. It is no longer capable of receiving and processing the event from the buck. The event must be deleted by the world application along with the doe or the event is left in limbo. > >>I think the example I would give would be that if the doe is shot and the buck still sends messages to the doe while it's lying on the ground dead, then that may be an example of sending events to a deleted instance. > >>The answer then is 4a, the Buck should be fixed because it's sending signals to dead does. > >My point is that the buck should be given the possibility of responding to the non-receipt of the event, dead or ignored, and walking away, rather than having the whole world cease to exist since the event could not be delivered. > Ok, I spent all night analysing this and here's what I came up with. Firstly I want to state that the Buck does get an event response before walking away. It's a time-out event. Buck sends event to doe - buck starts timer - no response from doe ... - timer expires - buck walks away. Secondly is the issue of what constitutes the event from the buck. I am going to assume that an event is instantaneous and therefore the mating ritual of the buck is not the event. This is the supporting action which can be observed by the receiving doe instance. I.e. observable buck state behaviour associated with the event, but not the event itself. So the question is, what is the event from the buck to the doe? I'll address this later, but first.. I'm going to make another assumption, and that is that two events cannot be generated or received at the same time. There will always be some miniscule time difference, meaning that one event is received before the other. Then I got to thinking that we have two cases: a) the event from the buck (whatever that is) is received just before the bullet event, b) or the bullet event is received slightly before the buck event. Furthermore, the doe will not die instantaneously upon receiving the bullet event, but will go into a dying state, whereby the doe instance shutsdown heart and brain functions before being deleted. It is only when an external object can recognise that that the doe is deleted is the doe in a 'Deleted' state. 
So this comes back to my argument that if the doe is deleted, the buck can recognise this and not send an event. If in case b) the bullet is received first, the doe goes into dying state, the buck doesn't understand this and so sends the mating event, the doe while in dying state, receives the event and responds with 'Event Ignored'. Now this leaves case a). In this case the buck sends the event to a healthy doe. The doe receives a bullet event and starts dying. the buck is confused because the doe should be responding in the healthy state, not dying. It was at about this point that I tried to figure out, what is this event that the buck sends. In order to do this I came up with a similar but simpler analogy. Replace the buck with an archer, the doe with a target, the act of making a mating call with the act of firing the arrow at the target and the response event, from the doe, with a light indicator when the arrow hits the target. Now the event from the archer is the releasing of the arrow towards the target. But this event is not directed at the target, it's directed at the arrow. The arrow changes from 'In Bow' state, to 'In Flight' state. When the arrow hits the target, the target recognises the arrow and responds by illuminating the light which causes an event to be sent to the archer. Ok, so let's go back to our buck and doe problem, with this pattern(?) in mind. The buck wants to attract the does attention, (i.e. wants the does light to illuminate). So it sends an event called 'Create Mating Call'. It's the Mating Call object which is responded to by the doe, not the Buck event. So now we can introduce our time delay between sending and receiving. The Buck creates Mating Call - Mating Call is transmitted to Doe - After being in an 'Exists' state for a certain time, Mating Call is recognized by Doe - Doe sends response event to originating Buck. Look, I don't know if this is agreeing with the original argument or if it's even relevant, but I spent some effort thinking about this so I'm writing it down. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Thu, 18 Nov 1999 10:29:07 David.Whipp wrote: >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >"David Harris" wrote: >> As to un-necassarily extending the notaion of the method I >> agree that any additions should be resisted. A major benefit >> of SMOOA is its small vocabulary, but if something is missing >> then its addition must be considered. > >I will agree that unnecessary extentions should be resisted. But >I'm also a beleiver in continuous improvement. I tend to look at >modifications in terms of shuffling the pieces and cleaning up >inconsistancies/ambiguities; with the occasional radical idea to >keep things interesting. Perhaps the only thing worse than >a method/notation that attempts to do everything is a language >that is dead, and which never adjusts to the experiences of its >users. > Or to put my take on it - "If it ain't broke don't fix it" Leslie. 
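Going back to the Mating Call idea in the buck-and-doe message above: one rough way to read that pattern is that the signal itself becomes an object with a short born-and-die lifecycle, and the receiver reacts to that object rather than to an event from the sender. A minimal sketch, with all names hypothetical:

    # Sketch only: the signal is reified as an object with its own lifecycle.
    # MatingCall, Buck and Doe are hypothetical names used for illustration.
    class Doe:
        def respond(self, buck):
            print("response event sent to", buck)

    class Buck:
        def __repr__(self):
            return "Buck"

    class MatingCall:
        def __init__(self, sender, receiver):
            self.sender = sender
            self.receiver = receiver
            self.state = "Exists"              # created by the Buck

        def recognized(self):
            # Some time later the Doe recognizes the call and responds;
            # the call instance then dies.
            self.state = "Recognized"
            self.receiver.respond(self.sender)
            self.state = "Deleted"

    call = MatingCall(Buck(), Doe())           # transmission begins
    call.recognized()                          # delivery happens later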
__________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Hold event after transistion Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > "Leslie A. Munday" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > -- > > On Wed, 17 Nov 1999 21:21:43 Bary D Hogan wrote: > >Bary D Hogan writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > > > >Someone mentioned a bank account example in which a deposit event > >could be received after the account was deleted. Maybe in that > >situation, the account should just be in a closed state and not > >deleted so it can react to the deposit event and initiate some > >appropriate action. > > > > But what about all these dead deer lying all over the place? If someone doesn't > delete them they'll start smelling bad. > > Seriously, what's the consequence of leaving records of 'dead' object instances > all over the place? Won't something unmanageable happen? > Yes, I don't think it is pratical to leave instances around forever. In the bank account example, I speculate that you might have a system in which you couldn't delete it right away since you might have transactions (events) still destined to it, and the only way to react to those is to leave it around. After some period of time, you should be able to delete it since whoever is sending these events should look for an existing account that is not closed. If it is not found or it is closed, then an appropriate response could be initiated. [Of course, if I was using the "Munday method", I wouldn't have the issue of leaving it around just to respond to events since they couldn't be generated to it once it is closed. Have you thought of writing a book? ;-)] When does a dead deer cease to exist? Where I'm from, they (or parts of them) usually stay around in the freezer for some time. Regards, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: Event Queues WAS:(RE: (SMU) Hold event after transistion) "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -- On Fri, 19 Nov 1999 08:15:49 Lee Riemenschneider wrote: >lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: >-------------------------------------------------------------------- >While I can't think of a real world requirement that couldn't be handled >without an event queue, I have some concerns about analyzing them away. I >think that the analysis would be so tightly synchronized that you would incur >a very large cost to modify it. i.e., your requirements had better be >complete, and your analysis had better be correct. Correct can also mean >implementation correct, because a very tight analysis won't allow much room >for movement in the implementation. > >I also think that such a tightly synchronized analysis would take much longer >to complete, and for large systems, it may be impossible. > You're pretty much correct in all your observations. 1) It forces your analysis model to be complete, just like the resulting code has to be complete. 2) It may take longer, but it means your implementation and design phases are shorter. The biggest/most complicated system I used this method on is a $20M robot control system. 
It took approximately 2 person-years, and another 6 months and it would have been complete. And yes, it does restrict the implementation, but on a program the size of this robot control system which was being subcontracted out, this was a very GOOD thing. I'm guessing that it does make the analysis much more rigorous, but being an analyst I consider this to be to my benefit. Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: RE: (SMU) Hold event after transistion David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Leslie A. Munday" wrote: > Seriously, what's the consequence of leaving records of > 'dead' object instances all over the place? Won't something > unmanageable happen? One consequence is that you keep an audit trail. So even though you "deleted" that email, it can still be reincarnated in a court room :-) As I've mentioned before, embedded systems that have hard real-time requirements may want deterministic memory management. If objects are never created nor deleted, then memory management becomes easier. (At time zero, you create all the instances in their states of non-existance) Another consequnce is that you don't get so many segmentation faults (or GPFs): you simply get memory leaks. Of course, a defensive architecture would take care of that for you. A good defensive design may be to convert all [really] deleted objects into a singleton null-object: this uses minimal memory and handles bad accesses in an efficient manner. If you have an application that will accumulate instances indefinitely, then you'd want your architecture to handle the 'deleted' state in a special way. You might colour the state to tell the architecture that its really a deleted state. It could then shove the data values onto disc, and/or free the memory (If its really a dead state, then the data values should never be read and you may decide that the architecture can lose them -- the implementation may even discover that the object is unreachable, and garbage collect it) With a good translator, equivalent implementations should be possible for both analysed "deleted" states and implicit (architectural) deleted states. Dave. Subject: RE: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Leslie A. Munday" wrote: > Or to put my take on it - "If it ain't broke don't fix it" If it ain't broke then, by definition, you can't fix it. There's a difference between "ain't broke" and "perfect" Here's a better algorithm: while ( ! method.isPerfect() ) { method.useIt(); if ( method.isBroke() ) { method.fixIt(); } else { method.changeIt(); } } No method is ever really broken -- its always usable for something. However, no method is perfect either. So I want methods to change. Here's a more accurate algorithm: while (method.exists()) { method.useIt(); if (me.frustrated() || me.bored()) { method.changeIt(); } } Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: RE: (SMU) Hold event after transistion David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- From: Leslie A. 
Munday [mailto:lmunday@england.com] > Replace the buck with an archer, the doe with a target, the > act of making a mating call with the act of firing the arrow > at the target and the response event, from the doe, with a > light indicator when the arrow hits the target. > > Now the event from the archer is the releasing of the arrow > towards the target. But this event is not directed at the > target, it's directed at the arrow. The arrow changes from > 'In Bow' state, to 'In Flight' state. When the arrow hits the > target, the target recognises the arrow and responds by > illuminating the light which causes an event to be sent to the archer. > > Ok, so let's go back to our buck and doe problem, with this > pattern(?) in mind. > > The buck wants to attract the does attention, (i.e. wants the > does light to illuminate). So it sends an event called > 'Create Mating Call'. It's the Mating Call object which is > responded to by the doe, not the Buck event. > > So now we can introduce our time delay between sending and > receiving. The Buck creates Mating Call - Mating Call is > transmitted to Doe - After being in an 'Exists' state for a > certain time, Mating Call is recognized by Doe - Doe sends > response event to originating Buck. It seems to me that your inintial event is equivalent to an SM event generator process; your event is an SM object with a simple born-and-die lifecycle; the SM event delivery is then a second event in your terminology. SM recognises this sequence and abstracts it into an event that has a propogation time. The SM event can carry data (like an object) but this cannot be read until the event is delivered. So as well as abstracting the event-object, SM also encapsulates it, and hides its content from all except its receiver. If you accept this mapping, then it is easy to envisage a mechanical translation from SM notation to your analysis notation. The mapping in the other direction is more tricky, because your notation appears, in this case, to be less abstract. So, in the various threads that are currently ongoing, all we are debating is the exact form of the abstraction. Is it your position that the SM event abstraction is inappropriate? If so, why? Which specific features of your notation are lost in the event abstraction? Perhaps we can add them :-). It is interesting to note that when I talk of implicit bridging of events, I am actually saying that we map an event in one domain to a set of objects and events in another domain. Of course, if both domains are in SM, then even the server-domain's events can be implicitly bridged to a still less abstract view. Eventually we would reach events that are implicitly mapped to a sychronous service, so we don't have an infinite regress of events being mapped to more events. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Oh No! Another real-world modeling thread! (was RE: (SMU) Anonymo David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > I believe that one of the principle benefits of OT, > particularly at the OOA level, is that one solves > the problem using models that are closely related to > the problem space. Thus most objects in an OOA > represent real entities (concrete or conceptual) in > the problem space. 
> The assertion that OT provides better maintainability > rests on the notion that the more closely objects > reflect the problem space, the less likely it will be > that modifications to the model will be more complex > than the modifications to the problem space that led to > the maintenance. I'm afraid I disagree with this (I'm a fully payed up member of the craftite conspiracy :-)). The better maintainability is a result of the way that the problem is described, not the fact that the problem is described. A model is simply a subjective view of reality. When we talk about a change in the problem domain, we are actually talking about a change in a different subjective view of the problem. (sure, the 'real' problem may also have changed, but our knowledge of that change is always subjective). The key to maintainability is in getting the right balance of cohesion within things, and decoupling between them. The dual view to this is we want to inherit apropriate locality from our subjective observations of the problem into our model. When you first start analysing a problem, all you have is your original, subjective, view of the problem. If analysis were simply a 1:1 mapping of this view into a tool then life would be very easy. Unfortunaly, such a model would not be maintainable. Analysis requires us to find the correct subjective view of the problem. This requires to to remove extraneous features from the original view, but you also impose artifacts of the modeling formalism into the problem description in order to get a model with appropriatly decoupled, cohesive, things. Dave. p.s. can we please take this thread to its "deleted" state. It seems impossible to kill it, but perhaps we can banish it back to comp.object, or OTUG, or anywhere other than here! -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > > This is *not* a model-to-model interaction. > > > > Sure it is. It is a special case where both models and > > both state machines happen to be the same. > > That is semantic quibbling. I don't think it is quibble at at all because... > Actually, I'm arguing that self-directed events are a > special case where the destination should be part of > the action's generator process. I sense this debate > going full circle :-(. > > Furthermore, I wish to make the distinction between > events which are explicitly send to self, and those > that are coincidently sent to self. This latter > category includes events routed along reflexive > relationships and along longer relationship loops. The event is sent to self because the analyst needs to do so to solve the larger problem of the domain mission. The state machine should not know about this. To me this is very fundamental to FSMs. The state model diagram describes states and the transitions among them. It says nothing about how those transitions come about. And... > I would be happy to replace a "generate self-directed event (Gm)" > process with a "do transition (Tj)" process. (aka "goto") This would > eliminate the confusion: a self > directed event is not an event: it is an abuse of the > event mechanism within SM that forces a specific > transition. So lets not call it an event. This is the core of our disagreement. 
Events are different than transitions. Transitions should represent the intrinsic properties of the object behavior without external context. The analyst provides events to describe the dynamic behavior of the _domain_. In doing so the analyst associates those events with transitions and identifies the sources of those events in a manner that will ensure correct overall domain behavior. In the same vein, the methodology imposes certain rules on the handling of events. One is that two events between the same two instances must be delivered in the same order that they were transmitted. Another is that events between the same instance must be delivered before events from other other instances. I see no substantive difference between these rules; in fact the second is a specialization of the first. If prioritizing self directed events is an abuse, then so is the first rule. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > >> I disagree about there only being one (instance that is). > > > >I agree with Simonson's discussion on this. For my part, the fact that the supertype and > >subtype have identical identifiers means that only one can be instantiated outside an > >action or else Normal Form will be violated. > > > Maybe I'm missing something, but are you saying that only one instance of a > supertype-subtype can exist at a time? I would think you could have an > instance of supertype-subtypeA and an instance of supertype-subtypeB at the > same time. Maybe I'm wrong. What I am saying is that there are not separate instances of a subtype AND its parent supertype. There can be any number of various subtypes' leaf instances, but they are all leaf instances and each inherits its set of attributes from it parent(s). The supertypes do not have an existence of their own, though a number of CASE tools do implement them separately. Put another way, there can only be one instantiation with a particular identifier value. In the OIM the supertype and subtype share identifiers. If the supertype were instantiated, it would have an identical identifier value as that of the subtype, which would violate Normal Form. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > >Your receive SDFD has to have a process to place an event on the queue in the domain. All I am suggesting is that the process be invoked twice, first with E29 and then with En. > > >If the SDFD is hard-wired in your architecture, then you need another architecture. B-) For us this is a common thing to have to do when domain interfaces are mismatched, regardless of issues about migration, etc. within the domain. > > If there is a one-to-one mapping, a receive SDFD may not be needed. If the Client domain has a wormhole with an async return (double wall bubble with an event output) then there need not be a receive SDFD. 
The wormhole defines the return event destination and the data to place on that event (which can be a combination of data from the local and remote sides of the wormhole). Therefore, to get 20 external events which indicate things happened, I need 20 wormholes and no receive SDFDs. If, however, I need to modify the return, the wormhole event has to be intercepted by a receive SDFD which can do things like modify parameters, throw extra events, etc., but now I have my 20 extra states back. We seem to be talking past one another here. Let me review my understanding of the problem. Some other domain is sending some number of events E30-E50 that the bridge directs to an instance. The bridge has a 'receiving SDFD' that places the E30-E50 event on the queue and performs any housekeeping required to accommodate an asynchronous return event, if any. The instance must process these events as a type C subtype. If it is not a type C subtype, it must migrate first to type C and then process the event. My solution is that whenever the receiving domain encounters any E30-E50 event the bridge SDFD should have two order-dependent processes. The first places E29 on the queue; that event is ignored if the instance is already type C but causes a transition to a type C if it is not. The second is the original E30-E50 event, which will always be processed by the instance as a type C. I don't see any extra states here, only one extra SDFD process. Nor do I see any dependence on the return event. That would be processed exactly the same way in both the receiving SDFD and the return bridge invoked when the response event is generated. > Back to the issue: > > We've gone off on a detailed discussion of the specific analysis of the model relating to an example. I am belaboring it because I am not convinced that one needs a combinatorially large set of states/instances to deal with the problem that HOLD ostensibly resolves -- at least not for bridge events. > My real discussion point was really meant to be the philosophical issue of how do you decompose what exactly the STT entries mean if HOLD is allowed. Going back to the response tables, before HOLD it looked like:
>
> Monospaced font:
>
>                Transition   Exception
> TRANSITION     true         false
> IGNORED        false        false
> CAN'T HAPPEN   false        true
> 1 ???          true         true
>
> Events are always consumed.

Let me preface by saying I am talking about the OOA97 version of HOLD below... Yes, but I think the issue is when. They are consumed by the instance's state machine, which isn't until they are released from the event queue. Between the time they are generated and the time they are released from the event queue, they simply exist. The HOLD just defines when the event may be released from the queue _to be consumed_. So I think your point is only relevant in two situations: There is a HOLD for the event in every state of the FSM. In this case the event could never be consumed, so it is an analysis error. There is no way, at the time the event is placed on the queue, for the FSM to transition to a state where the event is not HOLD, given the domain flow of control. I think this would require the event to be HOLD in at least two states. In any case, this would also be an analysis error. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N.
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris... Assuming the OOA97 implementation of HOLD... > I would argue that the guaranteed event ordering is not affected by the > use of HOLD. The order that events are made available to the receiving > instance is maintained. The HOLD allows the receiving instance to > choose the order that it removes the events from the queue. The question > may then become does the ordering rule apply to the transportation > mechanism of the events between instances or the order that the events > must be consumed? My interpretation is the former although the rule was > written prior to the suggestion of the HOLD so clarification may be required. I believe the ordering rule applies to the order in which they are consumed (i.e., delivered to the state machine). Ignoring HOLD for the moment, there is an indeterminate delay between the time an event is generated and the time it is consumed. I don't see any distinction between transmission delays and delays incurred because the queue has 0-N events already queued when the queue manager receives the event. The number of events on the already queue is just as unpredictable as the transmission delays to the queue, so one has to design the FSM interactions based upon a single, arbitrary delay between generation and consumption. I would also argue that the queue manager is an architectural artifact whose implementation should be transparent to the OOA, further supporting the idea that ordering applies to consumption rather than transmission to the queue. So I think that HOLD does, in fact, introduce an exception to the ordering rule for events between two instances, or even self-directed events. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Hold event after transistion Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- This message appears to have come directly to me from Lahman... I'm forwarding it to the group. ------------- Begin Forwarded Message ------------- Date: Sun, 21 Nov 1999 12:25:37 -0500 From: lahman Subject: Re: (SMU) Hold event after transistion To: Bary D Hogan MIME-version: 1.0 Content-transfer-encoding: 7BIT X-Accept-Language: en Responding to Hogan... > > Gregory Rochford writes to shlaer-mellor-users: > > > > If an _instance_ with pending events is deleted, I would consider that > > an > > analysis error. > > I disagree with this statement. It is possible for several events to > be pending to an instance and the first one accepted could cause that > instance to be deleted. Other events, at that point are meaningless. I am with Rochford on this, with some qualifications. It is up to the analyst to recognize that events might continue to be directed at or pending for an instance. If so, the analyst must introduce a mechanism to handle those events gracefully when the instance is deleted. One solution is the Dead state proposed elsewhere. That state could respond properly to the event (ignore it, generate a fatal error, send a response event, etc.). 
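As a rough illustration of that option, a minimal sketch of a lifecycle with an explicit Dead state -- the object and event names here are hypothetical, and ignoring the late event is only one of the possible responses:

    # Sketch only: an instance that is "deleted" in the analysis moves to a
    # Dead state that consumes any late events gracefully. Account, E_CLOSE
    # and E_DEPOSIT are hypothetical names.
    class Account:
        def __init__(self):
            self.state = "Active"

        def accept(self, event):
            if self.state == "Active":
                if event == "E_DEPOSIT":
                    print("processing deposit")    # normal action
                elif event == "E_CLOSE":
                    print("account closed")
                    self.state = "Dead"            # deletion in the analysis
            elif self.state == "Dead":
                # ignore, raise an error, or send a response to the sender
                print("event", event, "ignored by dead instance")

    a = Account()
    a.accept("E_CLOSE")
    a.accept("E_DEPOSIT")      # a deposit pending at deletion time is absorbed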
I think I would prefer to handle this in the architecture, mainly because the situation most commonly arises with domain external events in distributed systems. In that case the architecture probably already has protocols for handling such things. So I would prefer to colorize the delete state so that the architecture can do its thing for any pending or new events for the instance. (Note that the architecture might well introduce its own Dead state to do this.) However, a very strong caveat in doing such colorizing is that the analyst has determined that the architecture's mechanism is the correct one _in all possible cases_. If not, then the analyst has more work to do in the domain prior to deleting the instance. Another consideration is how important this is in the problem space. For example, if the problem space says that some other domain is going to continue to send events until told not to do so, I would do that (send the terminating event) explicitly in the domain. At the same time I would probably still colorize to clear out any pending events in the queue (assuming it is acceptable to the sender to have pending events ignored once the terminating event is sent). That is, I see terminating communications as an analysis issue while I see cleaning out the events in the queue as more of an implementation issue that implements the protocol for terminating communications. Admittedly, a touchy-feely judgment call that depends upon the particular situation. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com ------------- End Forwarded Message ------------- Subject: Re: (SMU) Hold event after transistion Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman... > > Responding to Hogan... > > > > Gregory Rochford writes to shlaer-mellor-users: > > > > > > If an _instance_ with pending events is deleted, I would consider that > > > an > > > analysis error. > > > > I disagree with this statement. It is possible for several events to > > be pending to an instance and the first one accepted could cause that > > instance to be deleted. Other events, at that point are meaningless. > > I am with Rochford on this, with some qualifications. It is up to the > analyst to recognize that events might continue to be directed at or pending > for an instance. If so, the analyst must introduce a mechanism to handle > those events gracefully when the instance is deleted. One solution is the > Dead state proposed elsewhere. That state could respond properly to the event > (ignore it, generate a fatal error, send a response event, etc.). > My main point was: If I assume that the analysis is correct, and the analysis says to delete the instance, then any events pending to that instance should be of no consequence. If they are of consequence, then then analysis needs more work. Possibly a dead state or something else. In other words, it is not _necessarily_ an analysis error for this situation to happen (which was why I disagreed with Rochford on that point). An example: I have a Symbol object that sets a timer. When the timer expires and the timer event is received, then the symbol starts flashing. An event that deletes the instance of symbol can be received anytime. 
When the delete event is received, the symbol is removed from the display and the instance is deleted. So if the symbol instance receives the "delete" event and there is a "timer expired" event pending, then it just doesn't matter. That event can be ignored. With that said, I do acknowledge that there probably are situations in which an error should be raised, or some other action taken when an event is pending to a non-existent instance. I just don't see how to do that in the analysis with the method as is (someone enlighten me). It probably has to be done as you have suggested below... > I think I would prefer to handle this in the architecture, mainly because the > situation most commonly arises with domain external events in distributed > systems. In that case the architecture probably already has protocols for > handling such things. So I would prefer to colorize the delete state so that > the architecture can do its thing for any pending or new events for the > instance. (Note that the architecture might well introduce its own Dead state > to do this.) > > However, a very strong caveat in doing such colorizing is that the analyst has > determined that the architecture's mechanism is the correct one _in all > possible cases_. If not, then the analyst has more work to do in the domain > prior to deleting the instance. Another consideration is how important this > is in the problem space. > > For example, if the problem space says that some other domain is going to > continue to send events until told not to do so, I would do that (send the > terminating event) explicitly in the domain. At the same time I would > probably still colorize to clear out any pending events in the queue (assuming > it is acceptable to the sender to have pending events ignored once the > terminating event is sent). That is, I see terminating communications as an > analysis issue while I see cleaning out the events in the queue as more of an > implementation issue that implements the protocol for terminating > communications. Admittedly, a touchy-feely judgment call that depends upon > the particular situation. B-) > Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > I don't disagree with anything you say. I would use the OCM as a tool to > "discover" the intrinsic problem space dynamics. I do too. But not at a detailed level. If I define the OCM to the specific event and data packet level, then I am completely defining the flow of control before ever looking at the intrinsic behavior of the objects. That will necessarily determine specific states and transitions in the state models based upon my view of the best way to solve _the domain problem in hand_. If I focus on the details of the OCM I see substantial risk that the individual state models will be driven by a particular solution -- a classical problem of procedural programming that OT was supposed to solve. > From the description of > your process, it seems like the analyst must intuite what the behavior of > an object is *in isolation from* other objects in the domain. Absolutely. In my view this is exactly what OT is about. > That seems to > > a) place too much responsibility on the analyst's intuition. No, I think it places responsibility on the analyst for identifying the behavior of problem space entities. 
More importantly, it helps that analyst focus on the invariant behavior of the entity that is common across many applications or specific solutions. While S-M objects may not be particularly reusable across applications, they should be reusable across application enhancements. Correctly modeling the invariant behavior has got to help in this. > b) ignore object interaction until later (too late IMO) in the process. I didn't say that at all. I advocated doing a high level of control and data flow analysis to establish overall requirements on individual objects. This is part of determining the internal level of abstraction for the domain. (E.g., two domains could each have an object representing the same problem space entity but the state models would be quite different due to differing levels of abstractions and views of requirements in the respective domains). I simply want to express the object context in terms of high level requirements rather than as low level data and control flows that are dependent upon a specific solution. One could make an OCM for this so long as it merely described the sorts of data and kinds of communications. One can also use informal use cases for this, which happens to be what we often do. In addition, I also said that one iterates over the state models and the OCM. One always does this anyway; my issue is about which one to detailed analysis on first. I simply see less risk of procedural bias by starting with the state models. > I can imagine scenarios where two analysts are building the state > models for two different objects, and each thought the other was > going to take care of some requirement (or they both do it). This is one reason why we tend to keep domains small and have one analyst do the models (subject to peer review). B-) However, if one is going to have multiple analysts working on a domain's models, then one has to be more formal about defining the requirements on individual objects' behavior. This formality adds overhead but the process is identical to what the individual analyst doing the domain would do. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) ESP Article (Kennedy-Carter's HOLD in State Transition lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > Here's a more accurate algorithm: > > while (method.exists()) > { > method.useIt(); > if (me.frustrated() || me.bored()) > { > method.changeIt(); > } > } There is always room for refinement, in this case in the interest of simplification to essentials: while (method.mentioned()) { if (me.gotTooMuchTimeOnHands()) { method.changeIt(); } } -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: Oh No! Another real-world modeling thread! (was RE: (SMU) lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > I believe that one of the principle benefits of OT, > > particularly at the OOA level, is that one solves > > the problem using models that are closely related to > > the problem space. 
Thus most objects in an OOA > > represent real entities (concrete or conceptual) in > > the problem space. > > > The assertion that OT provides better maintainability > > rests on the notion that the more closely objects > > reflect the problem space, the less likely it will be > > that modifications to the model will be more complex > > than the modifications to the problem space that led to > > the maintenance. > > I'm afraid I disagree with this (I'm a fully payed up > member of the craftite conspiracy :-)). > > The better maintainability is a result of the way that > the problem is described, not the fact that the > problem is described. Is there an echo in here? I thought that was what I just said. > The key to maintainability is in getting the right > balance of cohesion within things, and decoupling > between them. The dual view to this is we want to > inherit apropriate locality from our subjective > observations of the problem into our model. > > When you first start analysing a problem, all you > have is your original, subjective, view of the > problem. If analysis were simply a 1:1 mapping of > this view into a tool then life would be very easy. > Unfortunaly, such a model would not be maintainable. > > Analysis requires us to find the correct subjective > view of the problem. This requires to to remove > extraneous features from the original view, but you > also impose artifacts of the modeling formalism into > the problem description in order to get a model with > appropriatly decoupled, cohesive, things. It seems to me that you are agreeing with me. The methodology provides constructs to promote decoupling, such as bridging and pure message based communications. It also provides a paradigm for cohesion through maintaining different subject matters and levels of abstraction in domains. That, in turn, supports finding the correct view of the problem by defining the correct abstractions. The only substantive difference I see here is the notion that the view is subjective. While that is a potential pitfall in any human endeavor, I would argue that the *goal* is to provide an objective view. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Oh No! Another real-world modeling thread! lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... My keyboard locked up (the old enabling JavaScript script on NT trick) so I didn't get to add this to the previous message. > p.s. can we please take this thread to its "deleted" > state. It seems impossible to kill it, but perhaps > we can banish it back to comp.object, or OTUG, or > anywhere other than here! While (forum.isThere() { if (thread.mentioned()) { me.gottaTalkAboutIt(); } } -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- I think we're agreeing in principle, if not fact. lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford... 
> > > I don't disagree with anything you say. I would use the OCM as a tool to > > "discover" the intrinsic problem space dynamics. > > I do too. But not at a detailed level. If I define the OCM to the specific event and Well, I didn't specify the level of detail in the preliminary OCM :) But the intent was just enough to the layering right so state models could be built. > data packet level, then I am completely defining the flow of control before ever looking > at the intrinsic behavior of the objects. That will necessarily determine specific > states and transitions in the state models based upon my view of the best way to solve > _the domain problem in hand_. If I focus on the details of the OCM I see substantial > risk that the individual state models will be driven by a particular solution -- a > classical problem of procedural programming that OT was supposed to solve. > > > From the description of > > your process, it seems like the analyst must intuite what the behavior of > > an object is *in isolation from* other objects in the domain. > > Absolutely. In my view this is exactly what OT is about. > Just to be sure we mean the same thing: Isolation does not mean ignoring collaborations with other objects, correct? /snip -- mostly agreement / > > > I can imagine scenarios where two analysts are building the state > > models for two different objects, and each thought the other was > > going to take care of some requirement (or they both do it). > > This is one reason why we tend to keep domains small and have one analyst do the models > (subject to peer review). B-) However, if one is going to have multiple analysts > working on a domain's models, then one has to be more formal about defining the > requirements on individual objects' behavior. This formality adds overhead but the > process is identical to what the individual analyst doing the domain would do. > Two analysts at the same time, or one analyst six months later. Both situations require the same level of documentation :) best gr Subject: Re: (SMU) Hold event after transistion "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... >> My real discussion point was really meant to be the philosophical issue of how do you decompose what exactly the STT entries mean if HOLD is allowed. Going back to the response tables, before HOLD it looked like: >> >> Monospaced font: >> >> Transition Exception >> TRANSITION true false >> IGNORED false false >> CAN"T HAPPEN false true >> 1 ??? true true >> >> Events are always consumed. > >Let me preface by saying I am talking about the OOA97 version of HOLD below... > >Yes, but I think the issue is when. They are consumed by the instance's state machine, which isn't until they are released from the event queue. Between the time they are generated and the time they are released from the event queue, they simply exist. The HOLD just defines when the event may be released from the queue _to be consumed_. So I think your point is only relevant in two situations : > >There is a HOLD for the event in every state of the FSM. In this case the event could never be consumed, so it is an analysis error. > >There is no way, at the time the event is placed on the queue, for the FSM to transition to a state where the event is not HOLD, given the domain flow of control. I think this would require the event to be HOLD in at least two states. In any case, this would also be an analysis error. We may be at an impasse. 
I am not talking about either of these cases. My contention is that at the point in time when an object first sees an event (it has finished processing the last event and is now looking at the next event in the queue), it must decide what to do with it. Under OOA96 rules it can:
1) Consume event, transition to a new state and run that state's action.
2) Consume event, do not transition and do not run any action.
3) Post an error condition in that the event cannot happen here. Transition and action behavior is not defined.
I believe that HOLD adds the following:
4) Do not consume event, do not transition, do not run any action. The event will remain on the queue until some other event moves this object to another state. At that time we will look at it again and go through the same decision process.
I am maintaining that a complete analysis of these choices adds several options in error handling, plus it adds the possibility of:
5) Do not consume event, transition to new state, run its action.
When I refer to an event not being consumed, I am talking only about that particular juncture where the event is evaluated for the current state to determine what to do with it. Eventually, in some future state, all events need to have the possibility of being consumed at the analysis level. I believe that if we extend the STT to allow HOLD, then we should consider all of the associated implications of that action. I think we have the following:
-----------------
| STT Juncture  |
| *event        |
| *state        |
| next state    |
| [consume]     |
| error         |
-----------------
With an action something akin to:
if (error) deal with it
[if (consume)] remove from queue
if (next state) do state action (next state)
The items in [ ] are added because of the addition of HOLD. As is probably evident, I feel that if an attribute contains more than something or nothing then it is really two attributes. The STT, per SM, can be next state, ignore or can't happen. To me this is a problem. Ignore can be simply the nothing case. Can't Happen then is left out in the cold since it is neither something nor nothing, but rather something else. Therefore, the addition of the error attribute. Now, if we add HOLD, it is neither error nor next state. It must therefore require another attribute. Hence, the addition of consume. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering Waseca Operations Center E. F. Johnson Company dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... Somehow the ReplyTo is not getting set to SMUG for your messages. So when I do a simple Reply rather than ReplyAll I get the first name in the list, which is your personal return. Almost happened again on this message. > My main point was: If I assume that the analysis is correct, and the > analysis says to delete the instance, then any events pending to that > instance should be of no consequence. If they are of consequence, > then then analysis needs more work. Possibly a dead state or > something else. In other words, it is not _necessarily_ an analysis > error for this situation to happen (which was why I disagreed with > Rochford on that point). OK, but I would argue this approach makes an assumption about how the architecture is going to handle the 'no consequence' situation. Some will crash with varying degrees of gracefulness. B-)
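For instance, a minimal sketch of an architecture-level dispatch that does nothing rude when an event arrives for an instance that no longer exists -- the registry and the drop-and-log policy are hypothetical; a real architecture might just as well raise an error or notify the sender:

    # Sketch only: the event dispatcher tolerates events addressed to deleted
    # instances instead of crashing. The instance registry is hypothetical.
    class Dispatcher:
        def __init__(self):
            self.instances = {}                  # instance id -> live object

        def deliver(self, target_id, event):
            instance = self.instances.get(target_id)
            if instance is None:
                # graceful default: consume and log the orphaned event
                print("dropping", event, "for deleted instance", target_id)
                return
            instance.accept(event)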
If your architecture doesn't do anything rude, then you have effectively colorized the delete state to handle this. I agree, though, that this approach is probably a reasonable default that should be overridden by colorization in those cases where it is not appropriate. In most situations I've conjured up mentally the only issue is whether the source of the events has a need to know that they are being ignored. But it's not in the methodology yet as a formalism akin to interleaved vs. simultaneous views of time. > With that said, I do acknowledge that there probably are situations in > which an error should be raised, or some other action taken when an > event is pending to a non-existent instance. I just don't see how to > do that in the analysis with the method as is (someone enlighten me). > It probably has to be done as you have suggested below... I think an architectural solution is only *required* if there is no way to prevent the events from being sent (i.e., they originate outside the domain) and one does not care for the notion of permanently resident deleted instances. If the events are generated internal to the domain then I believe there should be some way of shutting them off to ensure that eventually the queue is exhausted. One scheme for doing this might be:

                  +------------+
                  |            | E2
                  v            |
+-----------+   +--------------+    +----------+
|    S1     | E2|      S2      | E3 |    S3    |
|  gen E2   |-->|              |--->|  delete  |
|  gen E1   |   |   hang out   |    |          |
+-----------+   +--------------+    +----------+

In this case state S1 generates an event E1:'I Don't Want To Hear Anymore About It' to the instance that is generating the E2 events that may be on the queue. It will probably also want to generate an E2 event to get to the state S2. The S2 state is a stepping stone to deletion that simply ignores the E2 events that may be on the queue. When the source instance gets the E1 it will stop sending E2 events and it will generate an E3:'I Hear Ya, Bubba' event. That event causes the transition to the real suicidal state. This sort of pattern works fine in simple cases, but it starts getting even more complicated when there are several possible sources for the E2 events. It also doesn't work well when the current instance may be re-created quickly. These problems can be overcome with some of the solutions suggested by others, but these add even more complexity to the domain. We have rarely run into this problem, but when we have it has been relatively easy to rationalize it as an architectural problem for ensuring consistency around instance deletions. One way to view it is as a special case of CAN'T HAPPEN. The architecture must always supply some mechanism for dealing with CAN'T HAPPEN events and there are a lot of choices. So it comes down to specifying the implementation's error handling mechanism via colorization. [OTOH, I would prefer to see more explicit descriptions of error handling in the OOA, but that's another story...] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Anonymous event notification David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > This is the core of our disagreement. Events are different than > transitions. Transitions should represent the intrinsic properties of > the object behavior without external context.
The analyst provides > events to describe the dynamic behavior of the _domain_. In doing so > the analyst associates those events with transitions and > identifies the sources of those events in a manner that will ensure > correct overall domain behavior. I agree with every sentence. I have no problems with the definition of events, nor with the definition of transitions nor states: only with applying it to the intent of expedited self-directed events. IMHO, the self-directed event describes a dynamic of the _object_, not the _domain_. It is this statement that is our disagreement. If I could convince you of this then, by the definitions you quote, a self-directed event is not an event. > In the same vein, the methodology imposes certain rules on > the handling of events. One is that two events between the > same two instances must be delivered in the same order that > they were transmitted. Another is that events between the > same instance must be delivered before events from other > instances. I see no substantive difference between > these rules; in fact the second is a specialization of the > first. I do not understand how you claim the specialization. The two rules do not depend on each other. The first is a statement about events between any, and every, _pair_ of instances in the system (the pair could be the same instance twice). The second rule is a statement which, for each and every instance, makes a distinction between self and not-self. This distinction defines the event's context to be the instance's self. All other instances are "not-self", and are excluded, thus excluding the concept of a pair. But all this is getting technical. The simplest reason is that explicitly self directed events on the OCM are not aesthetically pleasing (but perhaps beauty is in the eye of the beholder). Loops back to the sender on the OCM should be restricted to reflexive and looped relationships. The sender/receiver on the OCM is an object (prototypical instance), not an instance. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > IMHO, the self-directed event describes a dynamic of the _object_, > not the _domain_. It is this statement that is our disagreement. > If I could convince you of this then, by the definitions you quote, > a self-directed event is not an event. That convincing will be a challenge. If it walks like a duck and quacks like a duck and everyone calls it a duck... > I do not understand how you claim the specialization. The > two rules do not depend on each other. The first is a > statement about events between any, and every, _pair_ of > instances in the system (the pair could be the same > instance twice). The second rule is a statement which, for > each and every instance, makes a distinction between self > and not-self. This distinction defines the event's context > to be the instance's self. All other instances are "not-self", > and are excluded, thus excluding the concept of a pair. Both rules deal with the ordering of events. The first rule represents a generalization that does not depend upon the particular instances involved. The second rule refines the first rule for particular pairs of instances. Therefore it is a specialization.
The fact that what makes those pairs of instances particular implies a notion of self is simply a criteria for when the specialization applies. Awhile back we discussed a notational refinement for broadcast events across 1:M relationships. Assuming such a refinement were adopted, one could also define a rule for the ordering of those events. Would you argue that rule is not a specialization as well? That the events describe the dynamic of the relationship rather than the domain, so they don't belong on the OCM? > But all this is getting technical. The simplest reason is > that explicitly self directed events on the OCM are not > aesthetically pleasing (but perhaps beauty is in the eye > of the beholder). Loops back to the sender on the OCM > should be restricted to reflexive and looped relationships. > The sender/receiver on the OCM is an object (prototypical > instance), not an instance. As I indicated before, I think there would be substantial benefit in seeing self directed events on the OCM. Assuming good naming conventions were used one could follow the entire flow of control of the domain from that single diagram. As to the last sentence, how is that different than an event associated with a transition on a state model? State model : state machine :: object : instance. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > > > From the description of > > > your process, it seems like the analyst must intuite what the behavior of > > > an object is *in isolation from* other objects in the domain. > > > > Absolutely. In my view this is exactly what OT is about. > > > > Just to be sure we mean the same thing: > > Isolation does not mean ignoring collaborations with other objects, > correct? Yes and no. One can define objects in two domains that are abstractions of the same problem space entity. S-M insists that those abstractions be significantly different, reflecting the differing abstraction and subject matter of the domains. So their state models will be quite different. This clearly demonstrates the notion that objects' dynamic behavior can be different in different contexts. It is undeniable that the context is driving those differences. Where we appear to differ is in the definition of 'collaborating'. My assertion is that one should _initially_ attempt to develop state models independently. The collaboration comes in when defining the requirements on those state models that guides that independent development. One can gather those requirements from a high level OCM, use cases, and the overall knowledge of the abstraction garnered when developing the OIM. I see this as a much higher level of collaboration description than the you-do-this-then-I'll-do-that collaboration of a detailed OCM. > > This is one reason why we tend to keep domains small and have one analyst do the models > > (subject to peer review). B-) However, if one is going to have multiple analysts > > working on a domain's models, then one has to be more formal about defining the > > requirements on individual objects' behavior. This formality adds overhead but the > > process is identical to what the individual analyst doing the domain would do. 
> > > > Two analysts at the same time, or one analyst six months later. Both situations > require the same level of documentation :) I disagree here. The documentation of the requirements flow between objects within a domain is only needed during the development of the state models. Once those models are developed they have their own documentation (event descriptions, state descriptions, action language, OCM, etc.) that describes the resolution of those requirements. That documentation had better be sufficient or S-M's claims to being an unambiguous description are out the window. If you have a single developer, that intermediate documentation of requirements can reside on whiteboards and cocktail napkins. But if you have multiple developers you need more persistence and clarity of expression for that documentation because the developers must exchange ideas *before* the unambiguous models are created. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford... > > > > > From the description of > > > > your process, it seems like the analyst must intuite what the behavior of > > > > an object is *in isolation from* other objects in the domain. > > > > > > Absolutely. In my view this is exactly what OT is about. > > > > > > > Just to be sure we mean the same thing: > > > > Isolation does not mean ignoring collaborations with other objects, > > correct? > > Yes and no. One can define objects in two domains that are abstractions of the same problem > space entity. S-M insists that those abstractions be significantly different, reflecting the > differing abstraction and subject matter of the domains. So their state models will be quite > different. This clearly demonstrates the notion that objects' dynamic behavior can be > different in different contexts. It is undeniable that the context is driving those > differences. > Which is it, yes or no? For the case of collaboration between (analysis) objects in the same domain. I would think for this specific case you could give a simple answer :) > Where we appear to differ is in the definition of 'collaborating'. My assertion is that one > should _initially_ attempt to develop state models independently. The collaboration comes in > when defining the requirements on those state models that guides that independent > development. One can gather those requirements from a high level OCM, use cases, and the > overall knowledge of the abstraction garnered when developing the OIM. I see this as a much > higher level of collaboration description than the you-do-this-then-I'll-do-that collaboration > of a detailed OCM. > You must have missed this comment I put in the message you're now replying to: lahman wrote: > I do too. But not at a detailed level. If I define the OCM to the specific event and Well, I didn't specify the level of detail in the preliminary OCM :) But the intent was just enough to the layering right so state models could be built. 
gr Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > We may be at an impasse. I am not talking about either of these cases. My contention is that at the point in time when an object first sees an event (finished processing last event and is now looking at the next event in the queue) it must decide what to do with it. Aha. I believe this is the basis for our difference. I do not think an instance sees an event until the event queue has released it for consumption. At that point, it will always be consumed. > Under OOA96 rules it can > 1) Consume event, transition to a new state and run that states action. > 2) Consume event, do not transition and do not run any action. > 3) Post error condition in that event cannot happen here. Transition and action behavior is not defined. > > I believe that HOLD adds the following: > 4) Do not consume event, do not transition, do not run any action. The event will remain on the queue until some other event moves this object to another state. At that time we will look at it again and go through the same decision process. While I see the addition being to the queue manager's logic: 4) Do not release event to state machine if instance is in state X. In effect the architecture is inserting an additional delay between the time the event is generated and the time it is presented to the state machine that happens to depend upon the state of the target instance. But the behavior of the state machine itself is exactly the same. For example, if you recall Wilkie's pseudocode, that decision was made by the queue manager prior to processing the event according to the STT specification. One problem with HOLD is that it is an STT entry rather than a colorization of the event itself. This implies that the state machine is making the decision to accept an event when, in fact, the architecture is making a decision to delay an event. This is part of what I meant very early in this thread when I said that HOLD changes the nature of the STT. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > You must have missed this comment I put in the message you're now > replying to: > > lahman wrote: > > I do too. But not at a detailed level. If I define the OCM to the specific event and > > Well, I didn't specify the level of detail in the preliminary OCM :) > But the intent was just enough to the layering right so state models > could be built. Then we are agreed. I am just surprised because I thought PT advocated _completing_ the OCM before _starting_ the SMs. Of course I haven't taken a PT class in a decade, so that impression may be a tad out of date. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Anonymous event notification Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Rochford... > > > You must have missed this comment I put in the message you're now > > replying to: > > > > lahman wrote: > > > I do too. But not at a detailed level. If I define the OCM to the specific event and > > > > Well, I didn't specify the level of detail in the preliminary OCM :) > > But the intent was just enough to the layering right so state models > > could be built. > > Then we are agreed. I am just surprised because I thought PT advocated _completing_ the OCM before > _starting_ the SMs. Of course I haven't taken a PT class in a decade, so that impression may be a > tad out of date. > Well, it's been a long time since I took a class too :) (or taught one for that matter). So you agree that the analyst must consider object interactions when building the state model? (That part of the last message seems to have been lost in transmission, the part that wanted a simple answer :) The usual disclaimer (in case anyone thought otherwise): These are my opinions, not those of Project Technology. Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote > > IMHO, the self-directed event describes a dynamic of the _object_, > > not the _domain_. It is this statement that is out disagrement. > > If I could convince you of this then, by the definitions you quote, > > a self-directed event is not an event. > > That convincing will be a challenge. If it walks like a > duck and quacks like a duck and everyone calls it a duck... Maybe I can convince you that it looks like a duckling: perhaps an ugly one... :-) For the sake of argument, I'll drop my claim that its not an event for now. I'll try another way to suggest there's something different about it. You quite liked my idea of associating relationships with events. If someone was to take up that idea then they'd have to put it in the OOA-of-OOA. We might have: Event "is routed according to" 0,* Relationship_Endpoint A useful rule in analysis is to be wary of conditional relationships. They often indicate incomplete abstraction. If you have 2 objects, A and B, where "A 1:Mc B" then you should investigate the possibility that a more complete analysis would subtype A into 2 subtypes: A1 and A2 such that "A1 1:M B" and A2 is unrelated to B. Going back to the Event question, we might subtype event to Routed_Event "is routed according to" 1,* Relationship_Endpoint and Not_Routed_Event, which isn't. Not_Routed_Event is not a nice name, but we could actually find a set of objects to replace it: Assigner_Event, Creation_Event, Self_Directed_Event none of which are routing according to relationships (though "Assigner_Event" is related to "Relationship"). So Event might have 4 subtypes. You could replace "Relationship_Endpoint" with Relationship_Chain, but similar arguments would apply. 
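To make the proposed subtyping concrete, here is a minimal sketch of that hierarchy written as Python classes purely for illustration; the class names mirror the objects named in this posting and are not taken from any published meta model or tool:

    # Illustrative sketch only: the Event subtyping proposed above,
    # rendered as Python classes. Names follow the posting, not any
    # published S-M meta model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RelationshipEndpoint:
        relationship: str          # e.g. "R1"
        role: str                  # which end of the relationship is navigated

    @dataclass
    class Event:                   # the supertype
        label: str                 # e.g. "A1"
        supplemental_data: dict = field(default_factory=dict)

    @dataclass
    class RoutedEvent(Event):
        # The conditional 0,* association on the supertype becomes an
        # unconditional 1,* association on this subtype.
        route: List[RelationshipEndpoint] = field(default_factory=list)

    @dataclass
    class SelfDirectedEvent(Event):
        pass                       # destination is always "self"; no routing

    @dataclass
    class CreationEvent(Event):
        target_object: str = ""    # destination is a class, not an instance

    @dataclass
    class AssignerEvent(Event):
        relationship: str = ""     # related to a relationship, not routed along one

The only point being illustrated is that the conditional relationship disappears: Routed_Event carries a mandatory route, while the other three subtypes carry none.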
Before I continue this line of argument, lets see if you can agree that, at the very least, it is an analyst's responsibility to investigate the possibility that such a subtyping might exist as an alternative to a conditional relationship. > Both rules deal with the ordering of events. The first rule > represents a generalization that does not depend upon the > particular instances involved. The second rule refines the > first rule for particular pairs of instances. Therefore it > is a specialization. The fact that what makes those pairs > of instances particular implies a notion of self is simply > a criteria for when the specialization applies. I cannot agree with your hierarchy. Its a bit like the old ellipse/circle debate. The fact there is a generalization does not imply that one is a specialization of the two. In this case, it seems likely that both rules are specialisations of "event-ordering rule" but are, themselves, unrelated. Both rule can exist, unmodified, in the absence of the other. > Awhile back we discussed a notational refinement for > broadcast events across 1:M relationships. Assuming such a > refinement were adopted, one could also define a rule for > the ordering of those events. Would you argue that rule > is not a specialization as well? That the events describe > the dynamic of the relationship rather than the domain, > so they don't belong on the OCM? I would argue that no extra rule is needed. The existing rule covers it. (the self-directed event rule does not apply because we know there is a relationship -- another reason to subtype the event?). The broadcast 1:M event is nothing new: its just a natural consequnce of moving routing onto the OCM. Currently and action gets a set of instances and passes the set to an event generator. The generation of a set of events follows the instance-to-instance ordering rule. If the routing is moved to events on the OCM, then the set lookup moves too. The rule needs no change. The addition of extra rules is undesirable -- and therefore needs a very good justification. If a rule was added, then it could be added as a specialization of the pair rule or as a specialization of event-ordering_rule. It could not be a specialization of the self-directed event rule. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Anonymous event notification lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Rochford... > So you agree that the analyst must consider object interactions > when building the state model? (That part of the last message > seems to have been lost in transmission, the part that wanted > a simple answer :) The difficulty is that there isn't a single, simple answer. The main problem is that both 'collaboration' and 'interaction' are overloaded terms and in the context of an OOA I believe they are most commonly associated with the detailed event flows of a completed OCM. In that case I would have to say No because I use requirements flows for the crucial initial design and I intentionally avoid that level of abstraction. OTOH, another problem is that it depends where one is in the design iteration. On the last pass one certainly has the detailed OCM events in hand. So the answer could be Yes because at the end of the design cycle I use event-level interactions. 
But that, though technically correct enough to satisfy a defense lawyer, would be thoroughly misleading. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > Before I continue this line of argument, lets see if you > can agree that, at the very least, it is an analyst's > responsibility to investigate the possibility that such > a subtyping might exist as an alternative to a conditional > relationship. I agree. [With the qualification that there are broad classes of situations where conditionality is unavoidable.] > I cannot agree with your hierarchy. Its a bit like the old > ellipse/circle debate. The fact there is a generalization > does not imply that one is a specialization of the two. In > this case, it seems likely that both rules are specialisations > of "event-ordering rule" but are, themselves, unrelated. > Both rule can exist, unmodified, in the absence of the other. Even if I accept your thesis (which I don't because I believe the self-directed event rule is a specialization of the same-pair rule), each rule has an is-a relationship to 'event-ordering-rule'. That means that they are specializations of the same thing, which is exactly my point anyway. > > Awhile back we discussed a notational refinement for > > broadcast events across 1:M relationships. Assuming such a > > refinement were adopted, one could also define a rule for > > the ordering of those events. Would you argue that rule > > is not a specialization as well? That the events describe > > the dynamic of the relationship rather than the domain, > > so they don't belong on the OCM? > > I would argue that no extra rule is needed. The existing > rule covers it. (the self-directed event rule does not > apply because we know there is a relationship -- another > reason to subtype the event?). Holy Assumptions, Batman! How can you say that without knowing what the rule is or knowing what problem is being solved? I am hypothesizing that changing the notation to a single broadcast event across an M relationship will lead to a problem in modeling some arcane situation that requires a rule to modify the normal event ordering. Unlikely, perhaps, but not impossible. > If a rule was added, then it could be added as a > specialization of the pair rule or as a specialization > of event-ordering_rule. It could not be a specialization > of the self-directed event rule. Exactly! Just like the self-directed event rule. So if you think the self-directed event rule is somehow special enough to change its association from the domain OCM to the object state model, then the same logic should dictate that the broadcast rule is special enough to change its event association from the domain OCM to the OIM relationship. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: RE: (SMU) Hold event after transistion "Leslie A. 
Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Fri, 19 Nov 1999 18:48:33 David.Whipp wrote: >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >It seems to me that your inintial event is equivalent to >an SM event generator process; your event is an SM object >with a simple born-and-die lifecycle; the SM event >delivery is then a second event in your terminology. > >SM recognises this sequence and abstracts it into an >event that has a propogation time. The SM event can >carry data (like an object) but this cannot be read >until the event is delivered. So as well as abstracting >the event-object, SM also encapsulates it, and hides >its content from all except its receiver. > >If you accept this mapping, then it is easy to envisage >a mechanical translation from SM notation to your analysis >notation. The mapping in the other direction is more >tricky, because your notation appears, in this case, >to be less abstract. > >So, in the various threads that are currently ongoing, >all we are debating is the exact form of the abstraction. >Is it your position that the SM event abstraction is >inappropriate? If so, why? Which specific features of >your notation are lost in the event abstraction? Perhaps >we can add them :-). To answer your question I have to get to the basics and ask what is the purpose of analysis modeling? For me, I model a system specification in order to gain a complete and consistent set of requirements. The result of a model is a complete set of domains, objects, states, relationships, data flows, events and operations. Of these it is the operations which make up my requirements. Everything else is supporting information. States - indicate the precondition and postcondition of my requirement. Events - Are the stimulus for the requirement. DataFlows - Show external data used by the requirement. Domains, Objects and relationships are used to show the partitioning of the problem. Now any functionality which can be measured over time, forms a requirement of my system. Since events are not themselves a requirement, they cannot have time associated with them, else I'll be missing information when I come to write up my requirements. Also the fact that events are queued and prioritized will also have some impact on the requirements, and I don't know how to relate this information to a requirement of the form, Precondition->Stimulus->Action->Postcondition. It's all (mostly) explained in my short paper on requirements analysis. I think that the big difference between what I propose and what the S-M tools support is the difference between single-threaded and multi-threaded modeling. When I model, every instance is given its own thread of control. Events fly instantaneously between object instances and the receiving instances react immediately to the reception of an event. No queueing or prioritizing of events is necessary. Now if I tried this (which I am in the process of doing) with a tool that supports only a single thread of control. I.e. when an instance sends an event to another instance, the current instance becomes paused, and the receiving instance takes over the control of the system, things get rather complicated. When my currently running instance reaches the end of its current state processing (waiting for an event) where does the thread of control return to? 
I think I'm right in saying that S-M simulation avoids these sorts of problems by having states execute to completion, and queueing events. You may politely tell me that I'm wrong if you wish, but this is the difference between what I do and what S-M tools support, which is one of the reasons why I think S-M tools are not so appropriate for analysis. Ideally I would want a toolset which is a combination of a UML builder and a S-M simulator and a translation engine between the two. As it is, I model and document in Rose, then manually translate into my 'borrowed' copy of BridgePoint. Which leads me to the question: How do I simulate the interruption of the current action and immediate processing of a received event by an instance, using a S-M simulation tool? Leslie. __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) Hold event after transistion lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > The result of a model is a complete set of domains, objects, states, relationships, data flows, events and operations. > > Of these it is the operations which make up my requirements. Everything else is supporting information. > > States - indicate the precondition and postcondition of my requirement. > Events - Are the stimulus for the requirement. > DataFlows - Show external data used by the requirement. > Domains, Objects and relationships are used to show the partitioning of the problem. > > Now any functionality which can be measured over time, forms a requirement of my system. Since events are not themselves a requirement, they cannot have time associated with them, else I'll be missing information when I come to write up my requirements. If I understand this correctly, then your 'operation' is an analog to a state model action but it is quite a different thing. (More below.) This seems to me to be a very different view of time than that of an S-M OOA. An S-M OOA captures time in the sequencing of events between instances so the correctness of an OOA depends hugely on correctly accounting for event delays. In your usage, event transitions simply link post-conditions to pre-conditions for the state of the system to determine the context of a particular requirement, while time seems to be relevant only if it is explicitly identified in a requirement description.
Since an OOA almost always has some flow of control (i.e., event generation) based upon current values of attributes, an individual action could not accept a second event while it was processing, so one would need event queuing at least at this level. So this leads me to think that your view of an action as a requirement is quite literal. That is, it is a just a description of what must be done rather than an execution (albeit abstract) of what must be done. That is, you are using the models for a very literal minded requirements analysis (i.e., an organization of the statement of requirements) rather than an OOA (i.e., an abstract solution to the problem). > Now if I tried this (which I am in the process of doing) with a tool that supports only a single thread of control. I.e. when an instance sends an event to another instance, the current instance becomes paused, and the receiving instance takes over the control of the system, things get rather complicated. I would think so, since this is a _very_ synchronous architecture and S-M was designed for asynchronous analysis. B-) However, I don't think a more complex architecture would solve your problem. It seems to me that you have a different view of time, the nature of state model actions, and flow of control than S-M is geared to do. Baisically you are solving a different problem than that -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp and Lahman >> (Whipp) I cannot agree with your hierarchy. Its a bit like the old >> ellipse/circle debate. The fact there is a generalization >> does not imply that one is a specialization of the two. In >> this case, it seems likely that both rules are specialisations >> of "event-ordering rule" but are, themselves, unrelated. >> Both rule can exist, unmodified, in the absence of the other. > >(Lahman) Even if I accept your thesis (which I don't because I believe the >self-directed event rule is a specialization of the same-pair rule), each >rule has an is-a relationship to 'event-ordering-rule'. That means that >they are specializations of the same thing, which is exactly my point >anyway. I think this argument is a distraction from your real goal, which appears to be to provide simpler event routing (via relationships.) But this diversion is just too fascinating, so... Self-directed events are a) special and b) not special. 99% of the self-directed events one encounters (type a) can be eliminated by use of the Mealy machine, which is allowed in the method. IMHO, the remaining 1% of internal events (type b) don't need special handling, (i.e., prioritization) because in a good model they will arrive when the model is in the correct state to handle them. Type (a) events say, "I am done--take me to my next state ASAP". All type (a) events look the same. They have no supplemental data. A state model may have a bunch of them, but within one STD they can all have the same event identifier. (I believe as Whipp does, that they are glorified GoTo's.) They must be prioritized over external events to work properly. Type (b) events are like those in the OL:MWS juice plant example (see event M4, p. 49, bottom). 
They carry domain-related information (unlike type (a) events.) Type (b) events should not need to be prioritized if the state model is reasonably conceived. This picture of events and event handling suggests to me that the distinction between internal and external events (particularly the priority of one over the other) are unnecessary in SMOOA. If it were up to me, "self-directed event" (and "internal event") would disappear from our collective vocabulary. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- >99% of the self-directed events one encounters (type a) can be eliminated >by use of the Mealy machine, which is allowed in the method. Elaborate please. a) When I learned the method--not long ago--I was taught that actions are executed upon entry into a state, not upon entry into a transition. Clearly a Moore machine, not Mealy. b) How does the use of a Mealy machine guarantee that certain (self-directed) events are given priority? Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Nau, Peter" writes to shlaer-mellor-users: -------------------------------------------------------------------- Given that it is straightforward to transform a Mealy machine into a Moore machine and vice-versa, it seems unlikely that the type of machine has bearing on the need for self-directed events. There is another reason for needing self-directed events (which I can't remember at the moment, but I'm sure someone else can tell us). Mealy machines, per se, are not allowed in the method. > -----Original Message----- > From: Erick.Hagstrom@envoy.com [SMTP:Erick.Hagstrom@envoy.com] > Sent: Monday, November 29, 1999 1:07 PM > To: shlaer-mellor-users@projtech.com > Subject: RE: (SMU) is a self directed event special (was: Anonymous > event notification) > > Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > >99% of the self-directed events one encounters (type a) can be eliminated > >by use of the Mealy machine, which is allowed in the method. > > Elaborate please. > a) When I learned the method--not long ago--I was taught that actions are > executed upon entry into a state, not upon entry into a transition. > Clearly a > Moore machine, not Mealy. > b) How does the use of a Mealy machine guarantee that certain > (self-directed) > events are given priority? > > > > > Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hagstrom responding to me: ---------------------------------------------------- >>99% of the self-directed events one encounters (type a) can be eliminated >>by use of the Mealy machine, which is allowed in the method. > >Elaborate please. >a) When I learned the method--not long ago--I was taught that actions are >executed upon entry into a state, not upon entry into a transition. Clearly a >Moore machine, not Mealy. 
>b) How does the use of a Mealy machine guarantee that certain (self-directed) >events are given priority? I don't know what PT teaches now, but in the back of the course notes for the 1993 Recursive Design course there is an appendix discussing the equivalent power of the two types of automata and recommending, but _not requiring_, the Moore machine. It was recommended on the basis of there being fewer things to keep track of. Back then, however, I contacted PT to ask them about this, because we were finding that with our Moore state models, we were needing a lot of transient states followed by self-directed "I am done" events, which suggested to me that a Mealy machine would be simpler. I was told that their recommendation was solely on the basis of practicality, and their final advice was to use whichever style made the state models easier to understand (provided I could get support from the architecture.) As an experiment, one of our analysts spent an hour or two on some of our most complex state models, converting them to Mealy machines, and the improvement in clarity was striking. I have preferred the Mealy machine ever since. As for the "prioritization" of self-directed events under the Mealy machine, I was not claiming that Mealy prioritizes them; rather, it makes the self-directed, "I am done" events disappear from the model, so there is nothing to prioritize. For the events which are self-directed but _have meaning in the domain_, (type (b) in my earlier post) I was arguing that expedited handling of these is of doubtful necessity and that standard FIFO would suffice for most, if not all, problems. (An additional benefit to eliminating priority is that some architectural simplification occurs as well.) -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Nau: >"Nau, Peter" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Given that it is straightforward to transform a Mealy machine into a Moore >machine and vice-versa, it seems unlikely that the type of machine has >bearing on the need for self-directed events. >There is another reason for needing self-directed events (which I can't >remember at the moment, but I'm sure someone else can tell us). As an example of a good application of the Mealy machine, suppose your objects are layers in a communication protocol. A number of these models look like upside down daisies: a linear setup sequence leads down to a Ready state, in which several different types event are accepted with transitions which loop back (The head of the daisy.) With a Moore machine, the event-processing takes place in states clustered around the ready state. Once the processing is done, the machine needs to be placed back in the Ready state. (Alternatively, the event carries an "opcode", which is tested by the action routine of the Ready state, an n-way branch occurs in the Ready state's action and the processing occurs there. Pretty ugly. PT also discouraged this approach when I was learning the method.) 
This creates the need for the self-directed "I'm Done" event, and this event must be of higher priority than the events which arrive from outside the machine. With a Mealy machine (i.e., action on transition), the actions are put "on the petal of the flower", the action is discriminated by the event ID rather than an opcode, and the transition back to Ready is done without pseudo-event machinery. >Mealy machines, per se, are not allowed in the method. My PT materials (1993 course #3, appendix B) were less insistent about this. Unfortunately, a lot of the SM tools only support Moore, so the issue is moot. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Nau, Peter" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch: You make a good point re: the "I'm done" kind of event. I wish I could remember some examples that justify the need for processing self-directed events first, so we could consider whether a Mealy machine would eliminate those events. OOA96 says in section 5.6, Order of Receiving Events: "In OOA91, the only rule regulating the order in which events are received applied to events transmitted between a single sender/receiver pair. In all other cases OOA91 made no assumptions, and the analyst was directed to ensure that the state models operated properly regardless of the order in which events were received. However, in some cases this policy required the analyst to provide additional logic that had no real value in the domain under consideration. To remedy this problem, OOA96 imposes an additional rule:" ... and the rule for expediting self-directed events follows. I'm not at all religious about the kind of FSM one uses (even though some tools are), and it is certainly true that Mealy machines are simpler and more appealing under some (but not all) circumstances. My way of remembering which is which: "Moore machines have more states." :-) I don't think "parsimony" of states is necessarily the best criterion for the choice, but simplicity, maintainability, clarity and understandability arguably are. -Peter > -----Original Message----- > From: Lynch, Chris D. SDX [SMTP:LYNCHCD@HPD.Abbott.com] > Sent: Monday, November 29, 1999 3:57 PM > To: 'shlaer-mellor-users@projtech.com' > Subject: RE: (SMU) is a self directed event special (was: Anonymous > event notification) > > "Lynch, Chris D. SDX" writes to > shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Nau: > > > >"Nau, Peter" writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > >Given that it is straightforward to transform a Mealy machine into a > Moore >machine and vice-versa, it seems unlikely that the type of machine has > >bearing on the need for self-directed events. > >There is another reason for needing self-directed events (which I can't > >remember at the moment, but I'm sure someone else can tell us). > > As an example of a good application of the Mealy machine, > suppose your objects are layers in a communication protocol.
> > A number of these models look like upside down daisies: a linear > setup sequence leads down to a Ready state, in which several > different types event are accepted with transitions which loop back > (The head of the daisy.) > > With a Moore machine, the event-processing takes place in states > clustered around the ready state. Once the processing is done, > the machine needs to be placed back in the Ready state. > (Alternatively, the event carries an "opcode", which is tested by > the action routine of the Ready state, an n-way branch occurs > in the Ready state's action and the processing occurs there. > Pretty ugly. PT also discouraged this approach when > I was learning the method.) > This creates the need for the self-directed "I'm Done" event, and > this event must be of higher priority than the events which > arrive from outside the machine. With a Mealy machine > (i.e., action on transition), the actions are put "on the petal of the > flower", > the action is discriminated by the event ID rather than an > opcode, and the transition back to Ready is done without > pseudo-event machinery. > > >Mealy machines, per se, are not allowed in the method. > > My PT materials (1993 course #3, appendix B) were less insistent about > this. > Unfortunately, a lot of the SM tools only support Moore, so the issue > is moot. > > -Chris > > --------------------------------------------------- > Chris Lynch > Abbott AIS > San Diego CA > lynchcd@hpd.abbott.com > > "If you're as clever as you can be when you design it, > how will you ever debug it?" Kernighan and Plauger > --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" wrote: > With a Moore machine, the event-processing takes place in states > clustered around the ready state. Once the processing is done, > the machine needs to be placed back in the Ready state. ... > This creates the need for the self-directed "I'm Done" event, and > this event must be of higher priority than the events which > arrive from outside the machine. With a Mealy machine > (i.e., action on transition), the actions are put "on the petal of the > flower", > the action is discriminated by the event ID rather than an > opcode, and the transition back to Ready is done without > pseudo-event machinery. This scenario can often be eliminated by defining transitions to all other states from every state. Once an action is complete then, by definiton, the state is idle. If no processing is required on entry to the "ready" state then no "ready" state is needed. Other techniques involve using multiple objects. Instead of putting the opcode as supplemental data, you can deliver the event polymorphically to an "operation" object. None the "legal" approaches to this problem, including the self-directed event, feel right. When modeling it, I feel as if I'm describing a solution, not the problem. They all seem to violate "one-fact-in-one-place". > >Mealy machines, per se, are not allowed in the method. > > My PT materials (1993 course #3, appendix B) were less > insistent about this. > Unfortunately, a lot of the SM tools only support Moore, > so the issue is moot. OOA96, page 5: "We want to emphasize that we do not attach great significance to the particular notations employed in the work products of OOA." 
Mealy and Moore are formally equivalent, so either can be used. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) is a self directed event special (was: Anonymous event "David Harris" writes to shlaer-mellor-users: -------------------------------------------------------------------- responding to Lynch, Chris D. SDX wrote: > As an example of a good application of the Mealy machine, > suppose your objects are layers in a communication protocol. > > A number of these models look like upside down daisies: a linear > setup sequence leads down to a Ready state, in which several > different types event are accepted with transitions which loop back > (The head of the daisy.) I have also often seen the pattern you describe above and it always concerns me. The reason is that the events received in the Ready state do not cause a state transition, they actually cause processing to be performed that - due to the use of self directed events - is performed without a state change. To me this seems like it should be performed as a synchronous service rather than an asynchronous service. Leon Starr talked about the Ready state pattern at SMUG (UK) this year in a talk on 'How to avoid Bad Behaviour'; my notes suggest that his belief is that such a pattern is actually just a function and has not been modelled correctly. Unfortunately he did not have time to expand on the subject as much as I would have liked. A further concern that I have with the self directed event is that I have seen it used far too many times to turn an STD into a flow chart. But I guess that is an issue that must be guarded against at review. I have to admit that I agree with Whipp's comment (03:16 posting) that "None of the "legal" approaches to this problem, including the self-directed event, feel right.". Hopefully this thread could go some way to answering the problem. Dave Harris Subject: RE: (SMU) is a self directed event special (was: Anonymous event Tristan Pye writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Lynch, Chris D. SDX" writes to > shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Nau: > > >"Nau, Peter" writes to shlaer-mellor-users: > >Mealy machines, per se, are not allowed in the method. > > My PT materials (1993 course #3, appendix B) were less > insistent about this. > Unfortunately, a lot of the SM tools only support Moore, so the issue > is moot. The OOA-of-OOA that I have (from PT recently) describes both Mealy and Moore state machines. I assume that this means it is 'officially' part of the method - even if I've never seen a SM tool that allows Mealy. Tristan ----------------------------------- Tristan Pye Aerosystems International West Hendford Yeovil BA20 2AL Tel: +44 (1935) 443033 Fax: +44 (1935) 443038 E-Mail: tristan.pye@aeroint.com Web: www.aeroint.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris... > I have also often seen the pattern you describe above and it always > concerns me.
The reason is that the events received in the Ready > state do not cause a state transition, they actually cause > processing to be performed that - due to the use of self directed > events - is performed without a state change. To me this seems > like it should be performed as a synchronous service rather than an > asynchronous service. > > Leon Starr talked about Ready state pattern at SMUG (UK) this > year in a talk on 'How to avoid Bad Behaviour' my notes suggest > that his believe is that such a pattern is actually just a function > and as not been modelled correctly. Unfortunately he did not have > time to expand on the subject as much as I would have liked. FWIW, I agree with you and Starr in that most such patterns probably should be in synchronous services. A major clue that this should be case is the situation where the 'petal' states generate no event other than the self-directed event back to ready. In this (rather common) situation the various 'petal' actions have no effect on flow of control at the domain level. If that is the case, then the processing should be in a transform somewhere. Unfortunately, this introduces two new problems. The first is the issue of what to do with all those events that trigger the 'petal' operations. Typically this is handled by keeping the Ready state but having a single reflexive event where the Ready action invokes the transform using data from the event as an argument. But this can lead to the second problem where the state model only has one state, the Ready state, if it has no other interesting behavior and it exists throughout the execution -- which is a pretty shabby life cycle. One way around the last problem is to have a synchronous service associated with a passive object that performs the transform duties. Whoever generated the triggering events then simply invokes the transform. Most tools seem to support this, but the methodology seems pretty clear in making this a no-no because the invoker's ADFD can use data accessors and create/delete accessors but not tests or transforms from other objects. The rationale for this -- reducing the degree of coupling -- is certainly laudable. Without it one could easily fall into the mire of nested functional invocations that make a rat's nest of responsibility based systems. I believe one of the most attractive features of S-M models is that when I look at a state action I can be certain that its behavior is entirely self-contained. This is a major boon to maintainability and debugging. I would be reluctant to open up this Pandora's Box for the general case just to eliminate a particular inelegant state model pattern. So I would opt for the single state state machine or a set of modeling standards that prevented abuse of the passive object transform (i.e., Thou shalt not invoke other object's transforms from a transform). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > I think this argument is a distraction from your real goal, which > appears to be to provide simpler event routing (via relationships.) Of course. But, like Mount Everest, it is there. > But this diversion is just too fascinating, so... 
> > Self-directed events are > a) special and > b) not special. > > 99% of the self-directed events one encounters (type a) can be eliminated > by use of the Mealy machine, which is allowed in the method. > IMHO, the remaining 1% of internal events (type b) don't need special > handling, > (i.e., prioritization) because in a good model they will arrive when the > model is in > the correct state to handle them. This is probably true. But until tools offer the choice ya gotta dance with the one that brung ya. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) "Functional" State Models (was: is a self directed even "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Harris: ------------------------- >> I have also often seen the pattern you describe above and it always >> concerns me. The reason is that the events received in the Ready >> state do not cause a state transition, they actually cause >> processing to be performed that - due to the use of self directed >> events - is performed without a state change. To me this seems >> like it should be performed as a synchronous service rather than an >> asynchronous service. The problem with making it a synchronous service is that the ability to perform the service may be state-dependent; hence the use of events is more natural. >> Leon Starr talked about Ready state pattern at SMUG (UK) this >> year in a talk on 'How to avoid Bad Behaviour' my notes suggest >> that his belief is that such a pattern is actually just a function >> and has not been modelled correctly. Unfortunately he did not have >> time to expand on the subject as much as I would have liked. > This comment sounds similar to "[this object] is actually just a function and _therefore_ has not been modeled correctly", which I have heard on many occasions from some OOA analysts. I understand the sentiment behind such pronouncements, but more often than not the people making them do not have a better alternative, so we proceed with the obvious, "functional" models rather than trying to find that elusive, "correct" solution. ("Correctness" being determined by the OO police.) Someone else suggested that the Mealy machine should be avoided because it can lead to flowcharts masquerading as state models. I acknowledge the danger--in fact I have fallen into it myself :-) . But I think this argument (i.e., the more powerful model is more easily misused) has to be balanced against the reality that PT (in OOA96) prioritized self-directed "I'm done" events for a reason: such events were necessary in some circumstances and the standard Moore model was not meeting analysts' requirements. Hence the grafting on of a Mealy-like feature. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Lynch, Chris D. 
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Tristan Pye writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >The OOA-of-OOA that I have (from PT recently) describes both Mealy and Moore >state machines. I assume that this means it is 'officially' part of the >method - even if I've never seen a SM tool that allows Mealy. > >Tristan Thank you! That makes things a lot easier. Maybe the tool vendors will take notice...? -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman: ----------------------------- > >FWIW, I agree with you and Starr in that most such patterns probably > should > >be in synchronous services. A major clue that this should be case is > the > >situation where the 'petal' states generate no event other than the > >self-directed event back to ready. In this (rather common) situation > the > >various 'petal' actions have no effect on flow of control at the domain > >level. If that is the case, then the processing should be in a > transform > >somewhere. > > > >Unfortunately, this introduces two new problems. The first is the issue > of > >what to do with all those events that trigger the 'petal' operations. > >Typically this is handled by keeping the Ready state but having a single > >reflexive event where the Ready action invokes the transform using data > from > >the event as an argument. But this can lead to the second problem where > the > >state model only has one state, the Ready state, if it has no other > >interesting behavior and it exists throughout the execution -- which is a > >pretty shabby life cycle. > > > >One way around the last problem is to have a synchronous service > associated > >with a passive object that performs the transform duties. My premise was that there were interesting states other than the Ready state, so the object would need to be active rather than passive. Another point to be made on behalf of the Mealy machine is that it seems to be the more popular state modeling formalism for software development, as well as being popular in EE, telecomms, and protocol specification. In the interests of sharing knowledge freely it would be a Good Thing if I could plop their models into an SM tool without a lot of painful modifications. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" wrote: > I think this argument is a distraction from your real goal, which > appears to be to provide simpler event routing (via relationships.) Well yes, but one of my statements for that discussion was that we'd have to consider the impact on self-directed events. 
We seem to have reached the point now where we agree that we should at least consider it (even if the answer is that there is no impact). I think that, for both discussions, it would be useful to investigate events and their properties for routing, delivery, and mapping. This should be treated as a technical note: an essential part of constructing a model. 1. Instance-to-instance events A destination (for the generator-process) might be found by: 1a. following a relationship chain 1b. using an accessor-filter (e.g. "find CAR with color=red") Either of these might, coincidentally, result in a self-directed event of your type-b (the 1% that need no expediting). These events may be mapped if delivered to a supertype instance and delivered polymorphically. It seems that the routing for (1a) could be formalised on the OCM. Could this simplification be used for (1b)? Could we use something similar? [For example, could we define a (M) referential attribute (with automatic update), and then navigate it as any other relationship.] Another question is, if routing is moved to the relationships, then can polymorphism be viewed as routing along an is-a relationship? Would this simplify the OOA-of-OOA? What would happen to event mapping? 2. Self-directed events These are explicitly routed to "self". Polymorphic mapping is allowed, but its utility is questionable. Uses for these include: 2a. emulating action-on-transition (Mealy) 2b. emulating action-on-exit 2c. emulating "hold" events in conjunction with flag-attribute There are other possible uses, but most seem to be asking for trouble. Feel free to add to the list! I've used the loaded language "emulating..." to imply that there may be other ways of modeling the problems. The model becomes an implementation (in SM-OOA), not an analysis. It should also be noted that (2c) can go wrong in simultaneous time unless (non-standard) synchronization techniques are used. 3. Creation events The destination is a class, not an object. They cannot be self directed and cannot be polymorphic. 4. Assigner events These are sent to an assigner (which is on a relationship). They have no destination-instance, nor can they be polymorphic. I'm not sure if an assigner can use self-directed events. If it can, then they'll be the expedited kind. > Self-directed events are > a) special and > b) not special. > > 99% of the self-directed events one encounters (type a) can > be eliminated by use of the Mealy machine, which is allowed > in the method. IMHO, the remaining 1% of internal events > (type b) don't need special handling, (i.e., prioritization) > because in a good model they will arrive when the model is > in the correct state to handle them. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Even if I accept your thesis (which I don't because I believe the > self-directed event rule is a specialization of the same-pair > rule), each rule has an is-a relationship to 'event-ordering-rule'. > That means that they are specializations of the same thing, which > is exactly my point anyway. There are two rebuttals to that last statement. 1. We were talking about events, not event ordering rules.
The fact that 2 rules about events have something in common does not imply that the events do. If 1 rule subtype refers to one event subtype; and the other rule subtype refers to a different event type, then the only thing that the events have in common is that they are subtypes of event. 2. You can always identify a supertype to link two classes. Such a supertype only defines a limited commonality. For example, Jello and Skyscrapers might have a common property "wobbles when shaken". You cannot, from this commonality, infer that you can eat Skyscrapers. (Similarly you cannot infer, from the fact that 2 rules refer to event ordering, that both events should appear on the OCM; nor that the pair-rule applies to self-directed events.) Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. 'archive.9912' -- Subject: RE: (SMU) "Functional" State Models (was: is a self directed even lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Responding to Harris: >------------------------- >>> Leon Starr talked about Ready state pattern at SMUG (UK) this >>> year in a talk on 'How to avoid Bad Behaviour' my notes suggest >>> that his belief is that such a pattern is actually just a function >>> and has not been modelled correctly. Unfortunately he did not have >>> time to expand on the subject as much as I would have liked. >> > >This comment sounds similar to "[this object] is actually just a >function and _therefore_ has not been modeled correctly", which >I have heard on many occasions from some OOA analysts. >I understand the sentiment behind such pronouncements, >but more often than not the people making them do not have >a better alternative, so we proceed with the obvious, "functional" >models rather than trying to find that elusive, "correct" solution. >("Correctness" being determined by the OO police.) > Of course, in this case, you have to consider the source of the pronouncement. My impression of Leon was that he is much more concerned with the practical than the theoretical. I imagine that Starr calling the Ready state pattern "Bad Behaviour" means that he's never encountered one that couldn't be "fixed". Subject: (SMU) Software-only H/W synchronization? Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Greetings, I participated in a low-level hardware discussion yesterday centering around two CPUs communicating via dual-port ram (DPRAM). Apparently what's missing from the hardware is a way to check the value of one of the memory locations and if it's set to a specific value (semaphore available) then change the value (semaphore taken) all in one instruction cycle. Oh, and the start of that access to the DPRAM must lock out the other side from attempting the same access. Now my question: do semaphores, monitors, and event-counters *require* help from the hardware? Are there any software-only mechanisms to prevent two processes (one on each CPU) from simultaneously accessing the DPRAM? Kind Regards, Allen Theobald Nova Engineering, Inc. 
Cincinnati, OH, USA, 45246 http://www.nova-eng.com mailto:allent@nova-eng.com Subject: Re: (SMU) is a self directed event special lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > 1. We were talking about events, not event ordering rules. The > fact that 2 rules about events have something in common does > not imply that the events do. If 1 rule subtype refers to > one event subtype; and the other rule subtype refers to > a different event type, then the only thing that the events > have in common is that they are subtypes of event. Maybe that is the problem here. I am talking about event ordering rules. Besides, it was your subtyping example. B-) > 2. You can always identify a supertype to link two classes. > Such a supertype only defines a limited commonality. For > example, Jello and Skyscrapers might have a common property > "wobbles when shaken". You cannot, from this commonality, > infer that you can eat Skyscrapers. But in this case one can infer the property of being ordered, since it applies to both subtypes. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) "Functional" State Models (was: is a self directed "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > -----Original Message----- > From: [mailto:owner-shlaer-mellor-users@projtech.com]On Behalf Of Lee > Riemenschneider > Sent: Wednesday, December 01, 1999 8:02 AM > Of course, in this case, you have to consider the source of the > pronouncement. My impression of Leon was that he is much more > concerned with > the practical than the theoretical. I imagine that Starr calling > the Ready > state pattern "Bad Behaviour" means that he's never encountered one that > couldn't be "fixed". Lee - I sincerely hope you have some inside joke running with Leon - otherwise this kind of comment is way out of line for this list. Let's look to Whipp and Lahman as examples for how to disagree with respect and civility. Thank you. Actually I'd like to throw my lot in with the "functional/spider state models are *generally* bad" crowd. State diagrams in OO Analysis (OOA, UML) should represent lifecycles of the Objects/Classes they are connected to. As such, having the state model for an object that simply accepts requests and completes them with no other aspects to its lifecycle causes one to take pause and look for the following problems: - the object abstraction is weak, and re-abstraction or at least reallocation of the "functions" should be considered - some or all of the "legs" of the spider (functions) should be made available as object-level synchronous services This is based on our experience here at Pathfinder, and our understanding of the method in general (independent of notation). It could be that Lee has keyed on some other aspect of the problem, and I'm banging on the wrong nail here. In general, I find that inelegant or unnatural forms of Analysis expression (like the "spider state model") indicate that, perhaps with some additional effort or with a freer perspective, a cleaner and more effective abstraction could be applied. _______________________________________________________ Pathfinder Solutions Inc.
www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: Re: (SMU) Software-only H/W synchronization? Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Sounds like an assigner to me. You have to have some distributed object infrastructure, like CORBA :-) or DCOM :-( (smileys represent personal opinion only), and the assigner would have to be a singleton handling the entire distributed environment. This sounds to me like a really slow way of governing your DPRAM, but if they didn't put that kind of stuff in to begin with, I'm not sure what else you can do. Subject: Re: (SMU) Software-only H/W synchronization? Gregory Rochford writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen Theobald wrote: > > Allen Theobald writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Greetings, > > I participated in a low-level hardware discussion yesterday centering > around two CPUs communicating via dual-port ram (DPRAM). > > Apparently what's missing from the hardware is a way to check the > value of one of the memory locations and if it's set to a specific > value (semaphore available) then change the value (semaphore taken) > all in one instruction cycle. Oh, and the start of that access to the > DPRAM must lock out the other side from attempting the same access. > > Now my question: do semaphores, monitors, and event-counters *require* > help from the hardware? > No, but it makes it easier. :) > Are there any software-only mechanisms to prevent two processes (one > on each CPU) from simultaneously accessing the DPRAM? > My first impression was to say no, but then I think you could implement a double door (is that the canonical name?) approach to synchronize the two accesses. It works something like: set a word to indicate you want to do something. Check that the other CPU doesn't want to do something. If it doesn't, check your value again to make sure it's still ok. If so, then change the shared value. The metaphor is a hallway between two (n) rooms. Only one person is allowed in the hallway at a time. So you check if it's empty, then open your door, step in the hallway, check if it's still empty (and all other doors are closed), and close your door. I'm a little fuzzy on who does what when another door is open, or the hallway isn't empty. But that's the idea. I would also think the DPRAM would provide a way to lock out writes from one port (exclusive access), which would make things easier. You could set exclusive access, and then do the normal test-and-set instruction, then resume normal access. best gr
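Rochford's "double door" is essentially the two-flag idea that Peterson's algorithm makes rigorous by adding a turn variable. A minimal sketch, assuming the three shared words live in the DPRAM, that word-sized reads and writes are atomic, and that writes become visible to the other CPU in program order (all names are invented):

    /* Peterson's algorithm for two CPUs, using only reads and writes. */
    volatile int want[2] = { 0, 0 };   /* want[i] != 0: CPU i wants the DPRAM  */
    volatile int turn    = 0;          /* whose turn it is to back off         */

    void dpram_lock(int i)             /* i is 0 or 1 */
    {
        int other = 1 - i;
        want[i] = 1;                   /* open my door into the hallway        */
        turn = other;                  /* politely let the other side go first */
        while (want[other] && turn == other)
            ;                          /* spin until the hallway is clear      */
    }

    void dpram_unlock(int i)
    {
        want[i] = 0;                   /* step back out of the hallway         */
    }

On hardware that buffers or reorders writes this would also need memory barriers around the flag updates; volatile only restrains the compiler.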
Subject: Re: (SMU) Software-only H/W synchronization? chris.m.moore@gecm.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > Allen Theobald writes to shlaer-mellor-users: > Apparently what's missing from the hardware is a way to check the > value of one of the memory locations and if it's set to a specific > value (semaphore available) then change the value (semaphore taken) > all in one instruction cycle. I'm basing the following on the fact that DPRAM can be read but not written on both ports simultaneously. Single-instruction test-and-set type instructions only work for one processor, i.e. you'd be OK with the single instruction as long as the semaphore was only used by one processor. Er, I guess this was not what you intended. Imagine both processors simultaneously reading a semaphore, incrementing it and writing the same value back to the DPRAM. Not good. (Of course, you'll need extra logic to handle the contention on the write access). > Oh, and the start of that access to the > DPRAM must lock out the other side from attempting the same access. Write access? > Now my question: do semaphores, monitors, and event-counters *require* > help from the hardware? Yes. Well, semaphores do. > Are there any software-only mechanisms to prevent two processes (one > on each CPU) from simultaneously accessing the DPRAM? Sounds like you need two circular buffers, one for communicating in each direction. Put events (knew we'd get to SM eventually :-) ) at the tail of the buffer and move the tail-of-buffer pointer. The other processor takes events off the head of the buffer and moves the head-of-buffer pointer while the pointers aren't equivalent. If you really wanted to be flash you could get one processor's write to the tail-of-buffer pointer address to trigger an interrupt in the other processor... Chris Moore
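Moore's two-circular-buffer suggestion works precisely because each index is written by only one side: the producer owns the tail and the consumer owns the head, so no test-and-set is needed, only atomic word reads and writes. A rough sketch of one direction (sizes and names invented; it assumes the slot write becomes visible to the other CPU before the tail update does):

    #define QSIZE 16                        /* one slot is always left empty */

    struct dpram_queue {
        volatile unsigned head;             /* read index: written only by the consumer  */
        volatile unsigned tail;             /* write index: written only by the producer */
        volatile int      slot[QSIZE];      /* event codes                               */
    };

    int q_put(struct dpram_queue *q, int ev)        /* producer CPU */
    {
        unsigned next = (q->tail + 1) % QSIZE;
        if (next == q->head)
            return 0;                       /* full: caller decides what to do */
        q->slot[q->tail] = ev;              /* write the data first...         */
        q->tail = next;                     /* ...then publish it              */
        return 1;
    }

    int q_get(struct dpram_queue *q, int *ev)       /* consumer CPU */
    {
        if (q->head == q->tail)
            return 0;                       /* empty */
        *ev = q->slot[q->head];
        q->head = (q->head + 1) % QSIZE;
        return 1;
    }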
Subject: RE: (SMU) Software-only H/W synchronization? "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- How about a spin lock on setting the shared location to a particular non-zero value unique to each processor? E.g.,

    do
    {
        if (sharedByte == 0)               // available?
        {
            sharedByte = MyProcessorID;    // try to get it
            Delay(x);                      // let other processors finish write cycles
        }
    } while (sharedByte != MyProcessorID);

    sharedByte = 0;                        // release the resource

This might work, depending on what your hardware does with two simultaneous writes. Is there a bus arbitration scheme? Do all the 1 bits get OR'ed together? Do a one and a zero on the same data line generate an in-between value? I think the solution I gave handles this by requiring the right bit pattern to appear after a "collision period". An additional idea would be to require that multiple tests return the desired value. There might be some contention but I think one processor will eventually win within a few reads and writes. This could be used with task ID's (or object ID's) too, if all task ID's were unique across the population of requestors. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- >Are there any software-only mechanisms to prevent two processes (one >on each CPU) from simultaneously accessing the DPRAM? Subject: RE: (SMU) "Functional" State Models (was: is a self directed "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- I would like to extend my apologies to Lee Riemenschneider - I obviously misread his intent, and came at this all wrong. Also - my apologies to you readers for wasting your time with non-technical junk. Thank you for your patience. Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > 1. Instance-to-instance events > > A destination (for the generator-process) might be found by: > > 1a. following a relationship chain > 1b. using an accessor-filter (e.g. "find CAR with color=red") > > Either of these might, coincidentally, result in a self-directed > event of your type-b (the 1% that need no expediting). These > events may be mapped if delivered to a supertype instance and > delivered polymorphically. > > It seems that the routing for (1a) could be formalised on the > OCM. Could this simplification be used for (1b)? Could we > use something similar? [For example, could we define a (M) > referential attribute (with automatic update), and then > navigate it as any other relationship.] I would think so. I think there is a need for 1b just to avoid the OIM getting too cluttered with relationships for puce CARs, ochre CARs, mauve CARs,... I would also think that one would want a single, consistent notation that described both. While the (M) approach has a certain elegance I think one has to tread carefully here. I think this would take a lot of cogitation and a lot more theory than I have available to ensure that it works in all cases (e.g., elements of compound identifiers that are shared). I would also worry about the clutter of such attributes on the OIM if a separate attribute is needed, say, for each possible color of CAR that might be collected. > Another question is, if routing is moved to the relationships, > then can polymorphism be viewed as routing along an is-a > relationship. Would this simplify the OOA-of-OOA? What would > happen to event mapping? If one is buying into the rest, then I think consistency would require routing along the is-a. But this might be academic if one belongs to the one-instance camp -- if the supertype and subtype are just different views of the same instance, then there is no navigation. I would think event mapping remains at the subtype level. One still needs to know what transition the supertype event maps into in the local state machine. The relationship navigation just gets to the right state machine. In fact, I would argue that if one moves the navigation out of the ADFDs, then one has to have event mapping tagged to every navigation description.
Since the target is not known, an event would be identified generically where it is generated (i.e., its identifier would not match the event in the receiving state machine). > 2. Self-directed events > > These are explicitly routed to "self". Polymorphic mapping is > allowed, but its utility is questionable. Why would the utility be any different? I think I would use them in the supertype for exactly the same reasons I would use them in the subtype. > Uses for these include: > > 2a. emulating action-on-transition (Mealy) > 2b. emulating action-on-exit > 2c. emulating "hold" events in conjunction with flag-attribute > > There are other possible uses, but most seem to be asking for > trouble. Feel free to add to the list! 2d. Default state. In the normal course of events an instance initially places itself in a reset state whose values are in the data packet. But outlanders may subsequently issue the same event (now reflexive) with different data as necessary. 2e. Alternating iteration. Two instances conduct a multi-step iteration by conversing. The instance that does the last step in the set of alternating steps needs to send an event to both instances to reset both to the beginning. 2f. Partial processing. A partial reset may require activity X while a full reset requires activities X and Y. When the full reset is desired Y will invoke X via a self directed event to ensure that the reset is completed before anything else happens. 2g. Unusual circumstances. A particular event may require a thread of processing through the state machine to be completed before doing anything else. This would probably be handled by a transition to a state that generated the appropriate self-directed events. An example is emergency power down. 2h. Optional processing. One enters S1 based upon an externally generated event. But the state of the system when S1 is entered may dictate a transition to S2. Whether in S1 or S2, one transitions to S3 based upon an externally generated event. 2i. Priority processing. An event may require a different thread of processing through the state machine than normal. An example might be ^C processing where you go to a wait state where all events except Reset are ignored. > 3. Creation events > > The destination is a class, not an object. They cannot be self > directed and cannot be polymorphic. Why not polymorphic? If the event is a bridge event, the data will determine which subtype to create (since the sender can't know the subtypes exist). You can make the bridge smart enough to do this, but why not include it as a filter on the is-a relationship? If you can navigate to red CARs based upon attribute value, why not navigate to subtype based upon data packet value? Certainly a bad precedent in general, but perhaps acceptable for create events to supertypes. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Software-only H/W synchronization? "Stephen Irons" writes to shlaer-mellor-users: -------------------------------------------------------------------- Lynch's solution is acceptable provided that interrupts are disabled between reading the shared variable as 0 and writing the processor ID. Otherwise the Delay(x) will have to include time for interrupt processing; if a multi-tasking system is used, the delay may have to be very long indeed.
Lamport 1985 "A Fast Mutual Exclusion Algorithm" provides a brief background to the problem of mutual exclusion between processors using shared memory. He discusses the algorithm proposed by Lynch, and proposes two other algorithms, along with an informal proof that 5 memory accesses of 2 shared variables is the minimum required to ensure a bounded delay independent of the number of competing processors or tasks. It was published on the Web and a search turned it up for me about 6 months ago, though I don't recall the address off-hand. Stephen Irons Subject: Re: (SMU) Software-only H/W synchronization? Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- http://gatekeeper.dec.com/pub/DEC/SRC/research-reports/abstracts/src-rr-007.html
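For readers without the report handy, the first of the algorithms Irons describes looks roughly like the following sketch, reconstructed from memory, so treat the paper at the URL above as the authority. It assumes atomic word reads and writes to two shared DPRAM locations, nonzero CPU ids, and a delay longer than the time any contending CPU can take to get from its write of x to its write of y; delay() stands for whatever the platform provides. In the uncontended case it uses the five shared accesses Irons mentions (write x, read y, write y, read x, and the release of y):

    extern void delay(void);               /* hypothetical platform-supplied delay */

    volatile int x = 0, y = 0;             /* shared words in the DPRAM; 0 means free */

    void fast_lock(int id)                 /* id is a nonzero CPU identifier */
    {
        for (;;) {
            x = id;
            if (y != 0)
                continue;                  /* someone holds it or is taking it */
            y = id;
            if (x != id) {                 /* another CPU raced us             */
                delay();
                if (y != id)
                    continue;              /* we lost; start over              */
            }
            return;                        /* lock held                        */
        }
    }

    void fast_unlock(void)
    {
        y = 0;
    }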
Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > 1. Instance-to-instance events > > 1a. following a relationship chain > > 1b. using an accessor-filter (e.g. "find CAR with color=red") > I think there is a need for 1b just to avoid the OIM getting too > cluttered with relationships for puce CARs, ochre CARs, mauve > CARs,... I would also think that one would want a single, > consistent notation that described both. I agree that we'd want to avoid OIM clutter. Also that the (M) approach I suggested is probably a bit too mechanistic. The problem is described indirectly, suggesting that a more direct description should be possible. We'll need to think about it. > > 2. Self-directed events > > 2a. emulating action-on-transition (Mealy) > > 2b. emulating action-on-exit > > 2c. emulating "hold" events in conjunction with flag-attribute > > > > There are other possible uses, but most seem to be asking for > > trouble. Feel free to add to the list! Most of your suggestions are simply examples of the above. I'll take them in reverse order. > 2i. Priority processing. An event may require a different > thread of processing through the state machine than normal. > An example might be ^C processing where you go to a wait > state where all events except Reset are ignored. If the "^C" is an event, then no self directed events are needed. If "^C" is data on a "key pressed" event, then you have domain pollution (a bridge should decode the data and send a "^C" event). Otherwise, the "^C" information will be data in an attribute in the system. This is then an example of 2c: a held event implemented as attributes. > 2h. Optional processing. One enters S1 based upon an > externally generated event. But the state of the system > when S1 is entered may dictate a transition to S2. > Whether in S1 or S2, one transitions to S3 based upon > an externally generated event. This is a simple example of 2c. The instance enters state S1 and checks one or more attributes to optionally generate the self-directed event. > 2g. Unusual circumstances. A particular event may > require a thread of processing through the state machine > to be completed before doing anything else. This would > probably be handled by a transition to a state that > generated the appropriate self-directed events. An > example is emergency power down. If the "powerdown" is stored as data, causing each state to generate the next self-directed event, then this is an example of 2c. If, OTOH, "powerdown" is an event, with the destination state then generating the entire sequence of self-directed events, then this is an example of 2a: a complex action on a transition. I'd be very suspicious of a model that contained this. > 2f. Partial processing. A partial reset may require > activity X while a full reset requires activities X > and Y. When the full reset is desired Y will invoke > X via a self directed event to ensure that the reset is > completed before anything else happens. This is a simple example of 2a: Y is the action on the transition of the full-reset event. > 2e. Alternating iteration. Two instances conduct a multi-step > iteration by conversing. The instance that does the last > step in the set of alternating steps needs to send an event > to both instances to reset both to the beginning. My impression of this is that it's dubious modeling.
If the peers send out an "I'm ready" event on entry to their idle state then no self-directed event is needed. Of course, there might be some pending events held as attributes: an example of 2c. Alternatively, we can view the final state of the conversation as an action-on-transition (2a): the transition takes the first object to its reset state; its action sends the reset event to the second object. In general, however, I view any "conversation" as dubious. > 2d. Default state. In the normal course of events an instance > initially places itself in a reset state whose values are in the data > packet. But outlanders may subsequently issue the same event (now > reflexive) with different data as necessary. I'm afraid I don't quite understand this one. Which event is self-directed? How does the object know to generate it? Is this another example of 2c? > > 3. Creation events > > > > The destination is a class, not an object. They cannot be self > > directed and cannot be polymorphic. > > Why not polymorphic? If the event is a bridge event, the data will > determine which subtype to create What is a "bridge event"? All inputs from a bridge go via an SDFD (unless you use a syntactic shortcut). An event from an SDFD is a normal event. A creation event cannot be delivered polymorphically because it is delivered to a class, not an instance. There is no instance of the is-a relationship to follow to find the subtype. The identity of the supertype is not known until the creation event has been delivered. At this time, it's too late to do a polymorphic mapping, even if the subtype already exists. > If you can > navigate to red CARs based upon attribute value, why not navigate to > subtype based upon data packet value? Certainly a bad precedent in > general, but perhaps acceptable for create events to supertypes. How would the architecture know how to map the event-data to an instance ID? This mapping is described in the creation state of the destination object. Are you suggesting that the creation action is executed, and then the event re-delivered?! Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > 2. Self-directed events > > 2a. emulating action-on-transition (Mealy) > > 2b. emulating action-on-exit > > 2c. emulating "hold" events in conjunction with flag-attribute > 2g. Unusual circumstances. A particular event may require > a thread of processing through the state machine to be > completed before doing anything else. This would probably > be handled by a transition to a state that generated the > appropriate self-directed events. An example is emergency > power down. OK, I'll accept this one as a distinct use: to force a sequence of transitions through a state model. Any other events are held in the queue until the sequence is complete. Dave -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Software-only H/W synchronization? Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Thanks to all who posted!
It looks as if algorithm 1 of this paper wins out! Thanks again. Allen Erick.Hagstrom@envoy.com wrote: > Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > http://gatekeeper.dec.com/pub/DEC/SRC/research-reports/abstracts/src-rr-007.html Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > 2i. Priority processing. An event may require a different > > thread of processing through the state machine than normal. > > An example might be ^C processing where you go to a wait > > state where all events except Reset are ignored. > if "^C" is data on a "key pressed" event, then you have > domain pollution (a bridge should decode the data and > send a "^C" event) It could be a synchronous wormhole in the action that generates the event to the wait state. There are situations in a sequence of action processing where one wants to be selective about when to do the abort (e.g., you want to finish writing the attributes to ensure a consistent state but you don't want to generate more events). > > 2h. Optional processing. One enters S1 based upon an > > externally generated event. But the state of the system > > when S1 is entered may dictate a transition to S2. > > Whether in S1 or S2, one transitions to S3 based upon > > an externally generated event. > > This is a simple example of 2c. The instance enters > state S1 and checks one or more attributes to optionally > generate the self-directed event. Note I said 'state of the system'. The synchronous wormhole could apply here as well. But I suppose you could argue these two cases are really the same issue around synchronous wormholes. > > 2g. Unusual circumstances. A particular event may > > require a thread of processing through the state machine > > to be completed before doing anything else. This would > > probably be handled by a transition to a state that > > generated the appropriate self-directed events. An > > example is emergency power down. > > If the "powerdown" is stored as data, causing each state > to generate the next self-directed event, then this is > an example of 2c. > > If, OTOH, "powerdown" is an event, with the destination > state then generating the entire sequence of self-directed > events, then this is an example of 2a: a > complex action on a transition.
But the point of this example is that a sequence of self directed events must be generated once one knows there is a powerdown. > I'd be very suspicious of a model that contained this. Why? An emergency shutdown is probably one of the most critical processing sequences to get right in a system. I would certainly want to have that exposed in the OOA. When somebody slams the Big Red Button on our systems it is usually because some twit has caught his sleeve in the robotic handler or some such. > > 2f. Partial processing. A partial reset may require > > activity X while a full reset requires activities X > > and Y. When the full reset is desired Y will invoke > > X via a self directed event to ensure that the reset is > > completed before anything else happens. > > This is a simple example of 2a: Y is the action on > the transition of the full-reset event. And how does X get done in that case? There is still only one action on the transition. For the full reset event the action would have to do Y and then duplicate the X action on the partial reset event. > > 2e. Alternating iteration. Two instances conduct a multi-step > > iteration by conversing. The instance that does the last > > step in the set of alternating steps needs to send an event > > to both instances to reset both to the beginning. > > My impression of this is that it's dubious modeling. If the > peers send out an "I'm ready" event on entry to their idle > state then no self-directed event is needed. Of course, > there might be some pending events held as attributes: an > example of 2c. I agree that you could do it by having everybody send everybody else an "I'm ready" event, but I would be very dubious of that sort of modeling. B-) Consider a Channel object with states Ready, Set Up, Initiated, Fetched, and Biased and transitions
Ready -> Set Up
Ready -> Biased
Set Up -> Initiated
Initiated -> Fetched
Fetched -> Ready
Biased -> Ready
When active, a Channel is either Biased or expects the Set Up -> Initiated -> Fetched sequence. I assume that Ready actually does something useful like opening the channel relays. Each transition would be triggered by an external event except Fetched -> Ready. Let's compare the two transitions Biased -> Ready and Fetched -> Ready. I would argue that, from Channel's view, the nature of biasing suggests that Biased -> Ready must be an external event because Channel should not know when it is appropriate to terminate the biasing. OTOH, when I look at the intrinsic properties of Channel, it is probably fair to say that once the results have been fetched, the Channel's work is done and it can go rest. This _suggests_ that a self-directed event is appropriate (i.e., Channel has the knowledge to justify the transition.) Now let's look at things from the domain side. Clearly somebody else in the domain is going to know when it is time to remove a bias, so that object can generate the Biased -> Ready event. This is because at the domain level the notion of biasing has a context of duration, specifically when that duration ends. But what about Fetched -> Ready? Not so clear. The domain has to know when the results were successfully fetched, so Channel has to generate an event for that. But why would anyone in the domain need to send a Thank You note to Channel? There is no compelling reason to do so in the domain. The only notable occurrence is that Channel has fetched the results, but that has been announced by Channel already. The only justification would be simply to get Channel into the right state for the next set of processing, which is probably unknown at the moment. I would be very reluctant to generate an event in some other object just for that because it seems like "Do This" rather than "I'm Done" (i.e., context ridden).
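A rough C rendering of the Channel lifecycle above may make the asymmetry clearer; event names and the hardware call are invented for the sketch. Fetched -> Ready is driven by a self-directed event generated inside the Fetched action, because Channel itself knows its work is done, while Biased -> Ready can only happen when some other object sends Remove_Bias:

    #include <stdio.h>

    enum ch_state { READY, SET_UP, INITIATED, FETCHED, BIASED };
    enum ch_event { EV_SET_UP, EV_INITIATE, EV_FETCH, EV_BIAS, EV_REMOVE_BIAS, EV_FETCH_DONE };

    static enum ch_state channel = READY;

    static void channel_dispatch(enum ch_event ev);

    static void fetched_action(void)
    {
        printf("read results from hardware, announce them to the domain\n");
        channel_dispatch(EV_FETCH_DONE);          /* self-directed: Channel can go rest */
    }

    static void channel_dispatch(enum ch_event ev)
    {
        switch (channel) {
        case READY:
            if (ev == EV_SET_UP)        channel = SET_UP;
            else if (ev == EV_BIAS)     channel = BIASED;
            break;
        case SET_UP:
            if (ev == EV_INITIATE)      channel = INITIATED;
            break;
        case INITIATED:
            if (ev == EV_FETCH)       { channel = FETCHED; fetched_action(); }
            break;
        case FETCHED:
            if (ev == EV_FETCH_DONE)    channel = READY;   /* triggered from within */
            break;
        case BIASED:
            if (ev == EV_REMOVE_BIAS)   channel = READY;   /* some other object decides when */
            break;
        }
    }

No one in the domain ever needs to send the Fetch_Done; only the Remove_Bias comes from outside, from whichever object owns the duration of the bias.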
> > 2d. Default state. In the normal course of events an instance > > initially places itself in a reset state whose values are in the data > > packet. But outlanders may subsequently issue the same event (now > > reflexive) with different data as necessary. > > I'm afraid I don't quite understand this one. Which event > is self-directed? How does the object know to generate it? > Is this another example of 2c? The transitions are:
S1 -> S2 // This is the self-directed event that does a default initialization
S2 -> S2 // This is 0-N external events that define non-default initializations
The idea is that the action associated with -> S2 must do something interesting like writing to hardware. One needs S1 -> S2 to ensure that the hardware is returned to a known state. But somebody else might want a non-default state, sending their own -> S2 event. > > > 3. Creation events > > > > > > The destination is a class, not an object. They cannot be self > > > directed and cannot be polymorphic. > > > > Why not polymorphic? If the event is a bridge event, the data will > > determine which subtype to create > > What is a "bridge event"? All inputs from a bridge go via an > SDFD (unless you use a syntactic shortcut). An event from an > SDFD is a normal event. I was referring to an event from a bridge that was a create event. It is a normal event, but the issue is about which class it is directed at, supertype or subtype.... > A creation event cannot be delivered polymorphically because > it is delivered to a class, not an instance. There is no > instance of the is-a relationship to follow to find the > subtype. The identity of the supertype is not known > until the creation event has been delivered. At this > time, it's too late to do a polymorphic mapping, even if > the subtype already exists. I argue that since the domain invoking the bridge can't know about subtypes, there must be information in the data packet that determines which subtype will be created. I am just arguing that it is aesthetically appealing to direct such an event to the supertype class. This allows the decoding of the data packet to be explicitly defined in the domain (below) rather than hidden in the bridge. > > If you can > > navigate to red CARs based upon attribute value, why not navigate to > > subtype based upon data packet value? Certainly a bad precedent in > > general, but perhaps acceptable for create events to supertypes. > > How would the architecture know how to map the event-data to > an instance ID? This mapping is described in the creation > state of the destination object. Are you suggesting that the > creation action is executed, and then the event re-delivered?! I am suggesting that one use the same mechanism to select the navigation to class via the is-a relationship that one would use to describe the navigation to only red CARs. The only difference is that in the CARs example the value being tested is an attribute while in the create subtype case it is a data packet value. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N.
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: I'll not reply to all your comments; there are too many distractions. If you want me to rejustify any of my previous dismissals, then please let me know. 2i, 2h - your comments seem to add nothing substantive. 2g - already accepted > > > 2f. Partial processing. A partial reset may require > > > activity X while a full reset requires activities X > > > and Y. When the full reset is desired Y will invoke > > > X via a self directed event to ensure that the reset is > > > completed before anything else happens. > > > > This is a simple example of 2a: Y is the action on > > the transition of the full-reset event. > > And how does X get done in that case? There is still only > one action on the transition. For the full reset event the > action would have to do Y and then duplicate the X action > on the partial reset event. Simple: consider 2 states: S1 and S2. The object is in state S1. It can respond to 2 events: Full_Reset and Partial_Reset. Both events cause transitions to S2, which does action X. However, the Full_Reset transition has an action-on-transition that does Y. An advantage of the Moore Machine + self-directed-event approach is that it is easy to allow many transitions to share the same action. So we could allow a transition to reset from all states in the system: all the full-reset events can go via the same Y state (action). In a pure Mealy machine, actions can't be shared so easily. > > > 2e. Alternating iteration. Two instances conduct a multi-step > > > iteration by conversing. The instance that does the last > > > step in the set of alternating steps needs to send an event > > > to both instances to reset both to the beginning. > I would argue that, from Channel's view, the nature of > biasing suggests that Biased -> Ready must be an external > event [...] > OTOH, when I look at the intrinsic properties of Channel, > it is probably fair to say that once the results have been > fetched, the Channel's work is done and it can go rest. > This _suggests_ that a self-directed event is appropriate > (i.e., Channel has the knowledge to justify the transition.) It appears that no importance is attached to the "fetched" state: it is simply a transient state that generates a Fetch_Done event. This is clearly an example of my (2a) category: an action on a transition. > The only justification would be simply to get Channel > into the right state for the next set of processing, > which is probably unknown at the moment. You make an excellent argument for a point I made a few posts back: the context of a self directed event is the object, not the domain. They are used as "gotos" within the state model. > > > 2d. Default state. [...] > > I'm afraid I don't quite understand this one. Which event > > is self-directed? How does the object know to generate it? > > Is this another example of 2c? > > The transitions are: > > S1 -> S2 // This is the self directed event that does a default > initialization > S2 -> S2 // This is 0-N external events that define non-default > initializations > > The idea is that the action associated with -> S2 must do > something interesting like writing to hardware. One needs > S1 -> S2 to ensure that the hardware is returned to a > known state.
This still doesn't answer the question of how the object knows to self-generate the s1->s2 event. It seems to me that either S1 is transient (=>2a) or S1 reads system state (=>2c). > But somebody else might want a non-default state, sending > their own -> S2 event. So you are saying that someone, outside the object, might send an event with the specific intent of placing the object in a specific state. That set off so many alarm bells that I'm going deaf! >> [polymorphic creation events] > I am suggesting that one use the same mechanism to select the > navigation to class via the is-a relationship that one would > use to describe the navigation to only red CARs. The only > difference is that in the CARs example the value being tested > is an attribute while in the create subtype case it is a data > packet value. I suppose it might be possible. But lets see the mechanism! Are you saying that a requirement on the mechanism is that it should support this use of event data? If the actions that send the creation event knows which subtype to create, then it can send a creation event to that subtype. If it doesn't, then it sends the event to the supertype: the supertype decides which subtype to create. Do we really need to abstract object factories into the OOA-of-OOA? Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > > > 2f. Partial processing. A partial reset may require > > > > activity X while a full reset requires activities X > > > > and Y. When the full reset is desired Y will invoke > > > > X via a self directed event to ensure that the reset is > > > > completed before anything else happens. > > > > > > This is a simple example of 2a: Y is the action on > > > the transition of the full-reset event. > > > > And how does X get done in that case? There is still only > > one action on the transition. For the full reset event the > > action would have to do Y and then duplicate the X action > > on the partial reset event. > > Simple: consider 2 states: S1 and S2. The object is in state > S1. It can respond to 2 events: Full_Reset and Partial_Reset. > Both events cause transitions to S2, which does action X. > However, the Full_Reset transition has an action-on-transition > that does Y. But then you are doing both Mealy and Moore. I could see picking the model individually for particular state models, but not combining them. When you suggested the action-on-transition, I assumed you were simply choosing Mealy to handle a particular model. > > I would argue that, from Channel's view, the nature of > > biasing suggests that Biased -> Ready must be an external > > event [...] > > OTOH, when I look at the intrinsic properties of Channel, > > it is probably fair to say that once the results have been > > fetched, the Channel's work is done and it can go rest. > > This _suggests_ that a self-directed event is appropriate > > (i.e., Channel has the knowledge to justify the transition.) > > It appears that no importance is attached to the "fetched" > state: it is simply a transient state that generates a > Fetch_Done event. This is clearly an example of my (2a) > category: an action on a transition. 
No, the action associated with the state has to obtain the value from the hardware. But I think that is beside the point -- it doesn't matter whether the action is Mealy or Moore. My argument is that the knowledge justifying generating the event triggering the Fetched -> Ready transition is intrinsic to the model so the event should be explicitly generated there rather than in some other model. > > The only justification would be simply to get Channel > > into the right state for the next set of processing, > > which is probably unknown at the moment. > > You make an excellent argument for a point I made a few > posts back: the context of a self directed event is > the object, not the domain. They are used as "gotos" > within the state model. Yes and no. I contend one should define the transitions based upon the intrinsic properties of the object behavior. In a separate pass the analyst puts on the Domain Control Flow hat and decides where the events that trigger those transitions should be generated. I still see this as a domain level decision rather than an object level decision. The analyst, however, is free to base that decision on intimate knowledge of the state models. To the extent that the object abstraction suggests it owns the knowledge that defines when the event should be generated, it is an object level decision. In the case of the Bias -> Ready event, another object had that knowledge. So one could argue that all event generation is object level. > > The transitions are: > > > > S1 -> S2 // This is the self directed event that does a default > > initialization > > S2 -> S2 // This is 0-N external events that define non-default > > initializations > > > > The idea is that the action associated with -> S2 must do > > something interesting like writing to hardware. One needs > > S1 -> S2 to ensure that the hardware is returned to a > > known state. > > This still doesn't answer the question of how the object knows > to self-generate the s1->s2 event. It seems to me that either > S1 is transient (=>2a) or S1 reads system state (=>2c). The S1 -> S2 transition is not optional -- it is the default. S1 merely places the data for the default state on the event (usually just some flag value settings). One _always_ sets the hardware to a known state prior to embarking on a new test. During the processing of that test it _may_ become necessary to set hardware to something different than the default state. Whoever decides that (in this case it is the system user via a bridge event) will send the same event with different data values. > > But somebody else might want a non-default state, sending > > their own -> S2 event. > > So you are saying that someone, outside the object, might > send an event with the specific intent of placing the > object in a specific state. That set off so many alarm > bells that I'm going deaf! Not the state machine state. The state of attributes or, in this case, the hardware. > >> [polymorphic creation events] > > I am suggesting that one use the same mechanism to select the > > navigation to class via the is-a relationship that one would > > use to describe the navigation to only red CARs. The only > > difference is that in the CARs example the value being tested > > is an attribute while in the create subtype case it is a data > > packet value. > > I suppose it might be possible. But lets see the mechanism! > Are you saying that a requirement on the mechanism is that > it should support this use of event data? 
Hey, using relationship navigation that depended upon attribute data was your mechanism! B-) I'm just extending it for subtype create events to look at event data instead while navigating an is-a to a class. I _am_ saying that a create event from outside the domain has no choice but to look at the data packet (or at least the event type) to determine which subtype it goes to. Do it in the bridge or on the is-a, but it has to be done. [I also said I don't think this is a good idea for anything except create events sent to a supertype.] > If the actions that send the creation event knows which > subtype to create, then it can send a creation event to > that subtype. If it doesn't, then it sends the event to > the supertype: the supertype decides which subtype to > create. Do we really need to abstract object factories > into the OOA-of-OOA? How does the supertype do that in the second case? Also, this smacks of creating a separate supertype instance. I don't see it as that elaborate. If you are already allowing the analyst to specify 'red' for an attribute of X as a criterion for navigation, why not allow the analyst to specify 'red' for an event data packet member? It's still "when X.name == 'red'" in either case; X just becomes an object identifier in one case and an event identifier in the other. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- There have been several threads lately that have touched on the subject of using synchronous services instead of state models in some situations. The functional or "spider" state model pattern is an example. I have been struggling with this issue for some time, so I would like to gain a better understanding of when synchronous services are appropriate and when they are not. A state model that always does its processing and then generates an "I'm Done" event to return to the Ready state, or one that can transition from any state to any other state, doesn't really model the lifecycle of anything, so I can see the argument that these should just be done as services. However, the issue of whether some processing is part of a lifecycle or not seems to be different than the issue of whether the communication should be synchronous or asynchronous. In other words, even though some processing could be done as a service outside of a state model, it may be important that it be invoked asynchronously. The action that invokes the service may not want or need to wait for it to complete. Would it make sense to have the concept of an Asynchronous Service? Thanks, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 P.S. In case anyone didn't know, the presentations from the US SMUG Conference are available at Project Technology's web site (http://www.projtech.com/pubs/confs.html or go to Publications -> Conference Papers). Subject: (SMU) Re: Hold event after transistion and Mealy V Moore "Leslie A. Munday" writes to shlaer-mellor-users: --------------------------------------------------------------------
I've added this message as a textual attachment, since it has text pictures, which don't work on my silly little mail application. On Mon, 29 Nov 1999 09:02:16 lahman wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- Belated comments interspersed below: > >Responding to Munday... >> The result of a model is a complete set of domains, objects, states, relationships, data flows, events and operations. >> >> Of these it is the operations which make up my requirements. Everything else is supporting information. >> >> States - indicate the precondition and postcondition of my requirement. >> Events - Are the stimulus for the requirement. >> DataFlows - Show external data used by the requirement. >> Domains, Objects and relationships are used to show the partitioning of the problem. >> >> Now any functionality which can be measured over time, forms a requirement of my system. Since events are not themselves a requirement, they cannot have time associated with them, else I'll be missing information when I come to write up my requirements. > >If I understand this correctly, then your 'operation' is an analog to a state model action but it is a quite different thing. (More below.) Yes. > >This seems to me to be a very different view of time than that of an S-M OOA. An S-M OOA captures time in the sequencing of events between instances so the correctness of an OOA depends hugely on correctly accounting for event delays. In your usage event transitions simply link post-conditions to >pre-conditions for the state of the system to determine the context of a particular requirement while time seems to be only relevant if it is explicitly identified in a requirement description. > Agreed >> Also the fact that events are queued and prioritized will also have some impact on the requirements, and I don't know how to relate this information to a requirement of the form, Precondition->Stimulus->Action->Postcondition. >> >> It's all (mostly) explained in my short paper on requirements analysis. >> >> I think that the big difference between what I propose and what the S-M tools support is the difference between single-threaded and multi-threaded modeling. >> >> When I model, every instance is given its own thread of control. Events fly instantaneously between object instances and the receiving instances react immediately to the reception of an event. >> >> No queueing or prioritizing of events is necessary. > >Because instances have persistent data their actions cannot be re-entrant if those actions update the instance data. Since an OOA almost always has some flow of control (i.e., event generation) based upon current values of attributes, an individual action could not accept a second event while it was >processing, so one would need event queuing at least at this level. > Not sure what this statement means, (perhaps an example would help), but even so I have to disagree since, yes events are generated based upon attribute values and no, I have no need to queue events. An action could not accept a second event, since it would be interrupted by the first event and terminate. >So this leads me to think that your view of an action as a requirement is quite literal. That is, it is just a description of what must be done rather than an execution (albeit abstract) of what must be done.
That is, you are using the models for a very literal minded requirements analysis (i.e., an >organization of the statement of requirements) rather than an OOA (i.e., an abstract solution to the problem). > Yes, actions are written in English, or preferably using ADFDs (I don't have access to any tools that support ADFDs). >> Now if I tried this (which I am in the process of doing) with a tool that supports only a single thread of control. I.e. when an instance sends an event to another instance, the current instance becomes paused, and the receiving instance takes over the control of the system, things get rather complicated. > >I would think so, since this is a _very_ synchronous architecture and S-M was designed for asynchronous analysis. B-) > >However, I don't think a more complex architecture would solve your problem. It seems to me that you have a different view of time, the nature of state model actions, and flow of control than S-M is geared to do. Basically you are solving a different problem than that > What it comes down to is that to test the analysis model I need to execute the state diagrams, as a minimum. (It would be nice to execute the actions, but I'm not convinced that this is necessary. When you have events based upon the values of attributes, as mentioned earlier, then the execution can be paused and appropriate values set manually.) S-M tools are about the best for testing my analysis models, but I do find it necessary to make design decisions when doing the translation. Also S-M tools allow me to automatically generate an OCD, hence give me a static check of the event processing in my model.

To re-iterate:

A requirement is of form    -    This relates to

                                     -------
    Precondition             -       State A
                                     -------
                                        |
    Stimulus                 -        Event/
    Description              -        Action
                                        |
                                       \ /
                                        v
                                     -------
    Postcondition            -       State B
                                     -------

Do this for every requirement in your system and then join them together to make a complete model of your system (in theory anyway). This leads me on to the thread about Mealy versus Moore. As you can see, I prefer Mealy, always have. As to why Moore uses more (pun intended) states and has a greater number of self-directed events, it could be because: If I take the above scenario and convert it to Moore instead of Mealy:

A requirement is of form    -    This relates to

                                     -------
    Precondition             -       State A
                                     -------
                                        |
    Stimulus                 -        Event
                                        |
                                       \ /
                                        v
                                     -------
    Description              -        Action
                                     -------
                                        |
                              Self-Directed-Event
                                        |
                                       \ /
                                        v
                                     -------
    Postcondition            -       State B
                                     -------

do I not get something like the above? Leslie.
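The two forms above can be made concrete with a small executable sketch. The following Python fragment is purely illustrative -- the class, state, and event names are hypothetical and are not drawn from any model in this thread -- but it shows why the Moore encoding of the same requirement picks up an extra state and a self-directed event while the Mealy encoding does not:

    from collections import deque

    class MealyMachine:
        """The action is attached to the transition A --stimulus--> B."""
        def __init__(self):
            self.state = "A"

        def dispatch(self, event):
            if self.state == "A" and event == "stimulus":
                print("do the action (attached to the transition)")
                self.state = "B"

    class MooreMachine:
        """The action is attached to entry into an extra state; a
        self-directed event then completes the move to state B."""
        def __init__(self):
            self.state = "A"
            self.queue = deque()

        def dispatch(self, event):
            if self.state == "A" and event == "stimulus":
                self.state = "Doing_Action"        # the extra Moore state
                print("do the action (attached to state entry)")
                self.queue.append("action_done")   # self-directed event
            elif self.state == "Doing_Action" and event == "action_done":
                self.state = "B"

        def run(self, event):
            self.dispatch(event)
            while self.queue:                      # drain self-directed events
                self.dispatch(self.queue.popleft())

    if __name__ == "__main__":
        m = MealyMachine(); m.dispatch("stimulus"); print("Mealy ends in", m.state)
        o = MooreMachine(); o.run("stimulus");      print("Moore ends in", o.state)

Both machines end in state B; the only difference is where the action is attached and whether an internal event is needed to finish the transition.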
Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > But then you are doing both Mealy and Moore. I could see picking > the model individually for particular state models, but not > combining them. When you suggested the action-on-transition, I > assumed you were simply choosing Mealy to handle a particular > model. I see no reason to constrain myself to either, or even both. Though each is mathematically complete, in terms of abstraction both are inadequate. There are concepts which cannot be modelled without violating one-fact-in-one-place. For example, to emulate an action-on-exit in Mealy, you must duplicate the action onto every outgoing transition. To do this in Moore, you must first implement action-on-transition. This requires a transient state. Finally, if you want to implement action-on-entry in Mealy then you also need a transient state. Moore and Mealy are fine in the context of an implementation but they do not lead to normalised descriptions of behaviour. > My argument is that the knowledge justifying generating the > event triggering the Fetched -> Ready transition is > intrinsic to the model so the event should be explicitly > generated there rather than in some other model. It seems to me that the "Fetched" state is simply an artifact of the description. Instead of "initiated" and "fetched", have a single state named "Fetching". The entry into that state does the work of the "initiated" state. The exit from it does the work of the "Fetched" state. Logically, the event that causes the exit from "Fetching" causes a transition directly back to "Ready": to model it in a Moore notation requires a transient state; and a transient state in SM requires a self-directed event.
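As a rough sketch of that synthesis -- the Channel lifecycle, state names, and events below are hypothetical stand-ins, not the actual models under discussion -- a transient state can be emulated by having the state's action immediately queue an expedited self-directed event, so the instance moves on to Ready before any pending external event is examined:

    from collections import deque

    class Channel:
        def __init__(self):
            self.state = "Fetching"
            self.queue = deque()            # this instance's event queue

        def send(self, event, self_directed=False):
            # Self-directed events are expedited: pushed to the front.
            if self_directed:
                self.queue.appendleft(event)
            else:
                self.queue.append(event)

        def consume(self, event):
            if self.state == "Fetching" and event == "fetch_done":
                self.state = "Fetched"      # transient state
                print("Fetched: read the results from the hardware")
                self.send("go_ready", self_directed=True)
            elif self.state == "Fetched" and event == "go_ready":
                self.state = "Ready"
                print("Ready: common -> Ready activity")
            # events with no matching transition are simply ignored here

        def run(self):
            while self.queue:
                self.consume(self.queue.popleft())

    if __name__ == "__main__":
        ch = Channel()
        ch.send("fetch_done")   # external event ending the fetch
        ch.send("bias")         # some other pending external event
        ch.run()                # "go_ready" is consumed before "bias"
        print(ch.state)         # Ready

The self-directed "go_ready" event jumps ahead of the queued external "bias" event, which is the precedence the transient-state idiom relies on.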
> In the case of the Bias -> Ready event, another > object had that knowledge. But the transition from Bias to Ready is surely not triggered by same event. The end of a fetch and the end of a Bias seem like different situations. Even if they are the same, then the event is not self directed in either case: one causes Bias->Ready; the other causes Fetching->Ready. > The S1 -> S2 transition is not optional -- it is the default. > S1 merely places the data for the default state on the event > (usually just some flag value settings). So S1 a transient state that happens to modify the data associated with its transition. Well, that's an interesting variation; the simple transient state is a special case of this. > > > But somebody else might want a non-default state, sending > > > their own -> S2 event. > > So you are saying that someone, outside the object, might > > send an event with the specific intent of placing the > > object in a specific state. That set off so many alarm > > bells that I'm going deaf! > Not the state machine state. The state of attributes or, in > this case, the hardware. The phrase "-> S2 event" implies that the event is placing the state machine in the S2 state. > Hey, using relationship navigation that depended upon > attribute data was your mechanism! B-) I'm just > extending it for subtype create events to look at > event data instead while navigating an is-a to a class. Actually I haven't suggested a mechanism yet. I simply said that its something that people want to do. > I don't see it as that elaborate. If you are already > allowing the analyst to specify 'red' for an attribute > of X as a criteria for navigation, why not allow the > analyst to specify 'red' for an event data packet member? > It's still "when X.name == 'red'" in either case; X just > becomes and object identifier in one case and an event > identifier in the other. The problem that I see is to work out where to assign responsibility for the filter, and for the filter's data. Currently its easy: the sending action does everything: the navigation, the filter and its data. I have proposed that the navigation can be moved onto the OCM. This still leaves the filter and its data. Its possible that the filter(s) could also be moved to the OCM (and every link in a chain of navigations may have its own filter). But what about the data. This could go in the front part of the event (i.e. replace the current destination-id part with a the filter-data). A more radical alternative is to link the OCM with a modified OAM (my previous domain-DFD proposal) and insert the data automagically. This way, the event generator process remains a pure statement that "X has happenned" with no concept of a receiver, nor how to find a receiver. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... > There have been several threads lately that have touched on the subject > of using synchronous services instead of state models in some > situations. The functional or "spider" state model pattern is an > example. > > I have been struggling with this issue for some time, so I would like > to gain a better understanding of when synchronous services are > appropriate and when they are not. 
My view of a synchronous service is that it is a transform process. One can use a transform process for any activity that does not generate an event to another object. That is, it does not directly affect the flow of control at the domain level. Thus it is limited to synchronous or 'realized' processing at the instance level. I think the root criteria, though, for using synchronous services lies in the abstraction and mission of the domain. The nature of the domain determines the appropriate level of flow of control in that domain. Activities that are peripheral to the domain mission or are at a lower level of abstraction can be captured in synchronous services. In particular, one view of transforms is that they are a synchronous wormhole to a realized service domain that has the appropriate level of abstraction and mission. > Would it make sense to have the concept of an Asynchronous Service? If one subscribes to the notion of a transform being a synchronous wormhole to another domain, then it can be implemented as a form of bridge. That would allow the service domain to be internally asynchronous. In fact, that is exactly the way our device driver works. Though it is implemented asynchronously with state machines, it presents to the outside world a pure C API that is synchronous. Any client domain can invoke an API function as a synchronous wormhole -- it just won't return until the device driver has competed doing its thing. At the level of the client domain I don't think an Asynchronous Service would be a good idea. If it generates or receives events within the domain, then that processing should be captured at the state model level rather than hidden in the service. If, however, you intent is that it can simply be accessed asynchronously (i.e., whenever a bridge or action needs the service), then I think that is probably viable. I am rather nervous about it because it seems to sidestep the state model paradigm so central to S-M. OTOH, I haven't seen any crippling effects as a result of using them in this manner. [As it happens our CASE tool implements both domain level and object level SSes that can be invoked arbitrarily.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Re: Hold event after transistion and Mealy V Moore lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > >Because instances have persistent data their actions cannot be re-entrant if those actions update the instance data. Since an OOA almost always has some flow of control (i.e., event generation) based upon current values of attributes, an individual action could not accept a second event while it was > >processing, so one would need event queuing at least at this level. > > > > Not sure what this statement means, (perhaps an example would help), but even so I have to disagree since, yes events are generated based upon attribute values and no, I have no need to queue events. I have an instance with attributes X and Y. An action has four steps (1) Compute X + Y (2) Generate an event with that sum as data (3) Add 1 to X (4) Multiply Y by 2. If the action is invoked the processing could have completed (3) but not (4) when the action is invoked a second time and (1) is computed. 
In that case it would be using inconsistent values of X and Y for the event data. Or step (2) could be "Generate event E1 if the sum is divisible by 3", again an inconsistent event generation > An action could not accept a second event, since it would be interrupted by the first event and terminate. This would be even worse. If the first action is terminated after step (3) by the second event the data would be irreparably inconsistent. But I think this is all academic because your actions are not executable in the same sense that S-M actions are. > As you can see, I prefer Mealy, always have. As to why Moore uses more (pun intended) states and has greater numberof self-directed events could be because: > > If I take the above scenario and convert it to Moore instaead of Mealy: > > A requirement is of form - This relates to > > ------- > Precondition - State A > ------- > | > Stimulus - Event > | > \ / > v > ------- > Description - Action > ------- > | > Self-Directed-Event > | > \ / > v > ------- > Postcondition - State B > ------- > > do I not get something like the above? Why? Since your event/action is different than that used by S-M, I don't see that it makes a difference. You have converted the S-M state models from a dynamic description of processing to a static description of possible system states and the paths among them, so the nuances of Mealy vs. Moore don't seem terribly relevant. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I see no reason to constrain myself to either, or even both. > Though each is mathematically complete, in terms of abstraction > both are inadequate. There are concepts which cannot be modelled > without violating one-fact-in-one-place. > > For example, to emulate an action-on-exit in Mealy, you must > duplicate the action onto every outgoing transition. To do > this in Moore, you must first implement action-on-transition. > This requires a transitent state. Finally, if you want to > implement action-on-entry in Mealy then you also need a > transient state. > > Moore and Mealy are fine in the context of an implementation > but they do not lead to normalised descriptions of behaviour. What little I knew about finite state machine theory has long since seeped out through my toes, so I can't come up with a substantive reason why it is not a good idea to mix approaches in the same model. But I'd bet a modest sum there was one. B-) > > My argument is that the knowledge justifying generating the > > event triggering the Fetched -> Ready transition is > > intrinsic to the model so the event should be explicitly > > generated there rather than in some other model. > > It seems to me that the "Fetched" state is simply an > artifact of the description. Instead of "initiated" > and "fetched", have a single state named "Fetching". > The entry into that state does the work of the > "initiated" state. Can't do that. Separate hardware activities are associated with each state and no channel can be fetched until all have been initiated (i.e., somebody else who knows when all channels have been initiated must send the event to each relevant channel). 
[Actually 'Loaded' would have been a better name -- but SetUp, Initiate, Fetch is consistent with the nomenclature of the VXI Plug&Play specification that is fundamental requirement in the problem space.] > The exit from it does the work > of the "Fetched" state. Logically, the event that > causes the exit from "Fetching" causes a transition > directly back to "Ready": to model it in a Moore > notation requires a transient state; and a transient > state in SM requires a self-directed event. Yes, but the action associated with Ready is the same one that is associated with the return from Biased. So the same action on Fetch -> Ready would be a superset that included the same activity as on Bias -> Ready. > > In the case of the Bias -> Ready event, another > > object had that knowledge. > > But the transition from Bias to Ready is surely not > triggered by same event. The end of a fetch and the end > of a Bias seem like different situations. Even if they > are the same, then the event is not self directed in > either case: one causes Bias->Ready; the other causes > Fetching->Ready. Exactly. They are two different situations and two different events. But the action associated with -> Ready is the same. > > The S1 -> S2 transition is not optional -- it is the default. > > S1 merely places the data for the default state on the event > > (usually just some flag value settings). > > So S1 a transient state that happens to modify the data > associated with its transition. Well, that's an interesting > variation; the simple transient state is a special case > of this. No more transient than any other state. S1 has its own activity associated with it. For example it might be a state indicating that cleanup activities have been completed. Part of those responsibilities could be to ensure the hardware is returned to a particular (default) known state. But S2 already handles setting the hardware to a given state, so -> S2 is used to avoid duplication. > > > > But somebody else might want a non-default state, sending > > > > their own -> S2 event. > > > > So you are saying that someone, outside the object, might > > > send an event with the specific intent of placing the > > > object in a specific state. That set off so many alarm > > > bells that I'm going deaf! > > > Not the state machine state. The state of attributes or, in > > this case, the hardware. > > The phrase "-> S2 event" implies that the event is placing > the state machine in the S2 state. This is true. But it is not true that someone outside the state machine is trying to put it in a specific state. That event could be simply announcing that it is time to place the hardware in a non-default state. The analyst decides that the S2 transition can deal with that. > > I don't see it as that elaborate. If you are already > > allowing the analyst to specify 'red' for an attribute > > of X as a criteria for navigation, why not allow the > > analyst to specify 'red' for an event data packet member? > > It's still "when X.name == 'red'" in either case; X just > > becomes and object identifier in one case and an event > > identifier in the other. > > > > I have proposed that the navigation can be moved onto > the OCM. This still leaves the filter and its data. > > Its possible that the filter(s) could also be moved to > the OCM (and every link in a chain of navigations may > have its own filter). But what about the data. This > could go in the front part of the event (i.e. replace > the current destination-id part with a the filter-data). 
> A more radical alternative is to link the OCM with > a modified OAM (my previous domain-DFD proposal) and > insert the data automagically. This way, the event > generator process remains a pure statement that "X has > happenned" with no concept of a receiver, nor how to > find a receiver. I had assumed some similar mechanism whereby an explicit OOA notation to define the navigation and filter would be provided and linked to the event in the OCM. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Hogan... > > > There have been several threads lately that have touched on the subject > > of using synchronous services instead of state models in some > > situations. The functional or "spider" state model pattern is an > > example. > > > > I have been struggling with this issue for some time, so I would like > > to gain a better understanding of when synchronous services are > > appropriate and when they are not. > > My view of a synchronous service is that it is a transform process. One > can use a transform process for any activity that does not generate an > event to another object. That is, it does not directly affect the flow of > control at the domain level. Thus it is limited to synchronous or > 'realized' processing at the instance level. I don't share your view of synchronous services in that I don't view synchronous services as being at any lower level of abstraction than state actions. I think synchronous services should be able to do anything an action can do including generating events. [I'm not sure what the "official" view is here. Is there one?] I could try to go into all of the different situations in which I think synchronous services are useful and why in an attempt to validate my argument, but that would take a while. Maybe on a later thread. I was hoping this thread would focus on the issue of synchronous vs. asynchronous communication. If you accept the theory that a state model that exhibits a bad pattern could be modeled as services instead of a state model, then you are potentially changing asynchronous communication (events) to synchronous communication (service calls). It seems like it may be better to keep the bad state model and preserve the asynchronous communication rather than use synchronous communication. Or, if there is a way to specify that a service call is asynchronous, then maybe you haven't sacrificed anything (as far as keeping the communication asynchronous). > > I think the root criteria, though, for using synchronous services lies in > the abstraction and mission of the domain. The nature of the domain > determines the appropriate level of flow of control in that domain. > Activities that are peripheral to the domain mission or are at a lower > level of abstraction can be captured in synchronous services. In > particular, one view of transforms is that they are a synchronous wormhole > to a realized service domain that has the appropriate level of abstraction > and mission. > > > Would it make sense to have the concept of an Asynchronous Service? 
> > If one subscribes to the notion of a transform being a synchronous wormhole > to another domain, then it can be implemented as a form of bridge. That > would allow the service domain to be internally asynchronous. In fact, > that is exactly the way our device driver works. Though it is implemented > asynchronously with state machines, it presents to the outside world a pure > C API that is synchronous. Any client domain can invoke an API function as > a synchronous wormhole -- it just won't return until the device driver has > competed doing its thing. > > At the level of the client domain I don't think an Asynchronous Service > would be a good idea. If it generates or receives events within the > domain, then that processing should be captured at the state model level > rather than hidden in the service. > > If, however, you intent is that it can simply be accessed asynchronously > (i.e., whenever a bridge or action needs the service), then I think that is > probably viable. I am rather nervous about it because it seems to sidestep > the state model paradigm so central to S-M. OTOH, I haven't seen any > crippling effects as a result of using them in this manner. [As it > happens our CASE tool implements both domain level and object level SSes > that can be invoked arbitrarily.] I do agree with you in that I think the state model and events should be the primary form of modeling behavior, but I keep seeing more and more situations in which synchronous services are necessary. If they are needed for the _synchronous_ aspect, then no problem. It's just very convenient sometimes to use them for the _process_ aspect which may or may not be appropriate. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > What little I knew about finite state machine theory has long since > seeped out through my toes, so I can't come up with a substantive > reason why it is not a good idea to mix approaches in the same > model. But I'd bet a modest sum there was one. B-) For a hardware implementation there are good reasons. But conceptually? SM uses self-directed events to synthesise action-on-transition. The concept of the mixing is present in the problems being modelled, but the method forces this to be modelled indirectly. The only danger I can see with providing many options is that you could end up with an eclectic notation, like UML. The cost of introducing transient states for actions on transitions is not too high, so a specific notation for it does not buy much (unless you can get rid of all self-directed events at the same time). But an explicit "on-exit" action is much more cost-effective because its synthesis is nasty. > Yes, but the action associated with Ready is the same one that is > associated with the return from Biased. So the same action > on Fetch -> Ready would be a superset that included the same > activity as on Bias -> Ready. [...] > Exactly. They are two different situations and two different events. > But the action associated with -> Ready is the same. That's why you put actions on transitions _and_ on entry to states. The common actions go on-entry; the differences go on the transitions. If you stick with either pure Mealy or pure Moore, then you always end up adding transient states.
In Moore you add them when you want an action-on-transition; in Mealy you add them when you want an action-on-entry to a state. Synthesis of action-on-exit from a state requires a hideous number of transient states, plus queued expedited self-directed events! > > So S1 a transient state that happens to modify the data > > associated with its transition. Well, that's an interesting > > variation; the simple transient state is a special case > > of this. > > No more transient than any other state. S1 has its own activity > associated with it. Definition: In the context of SM, a transient state is one that unconditionally generates an expedited self-directed event. Such a state may do any number of other things including generating other events. Hence the transient state can be thought of as an action on a transition. The outgoing self-directed event may contain data that is different to the data on the event that caused the transition into the state. A transient state may generate multiple self-directed events. This allows an action-on-exit to be synthesised with no duplication of the on-exit action, but the cost of doing so is high. It also allows the state to force a sequence of transitions through the state machine. If a self-directed event is not unconditional, then it will be conditional. The condition could be on: 1. attributes in the domain 2. event data 3. explicit wormhole return. The stereotypical use of (1) is when events are implemented in data to bypass the event queue. This use hides itself in many garbs, but is the most common use of conditional self-directed events. "Held" events are often synthesised this way. (2) indicates that 2 events have been merged into one. It's almost certainly a modelling error. There are probably a few pathological examples where it is acceptable. It is interesting to consider this case when thinking about an ideal state model notation from a one-fact-in-one-place perspective. I can't generalize the use of (3). I've not used it. It's possible to mix the conditions: I don't do that either. You don't have to do something just because it's possible. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Synchronous vs. Asynchronous behavior David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary D Hogan writes to shlaer-mellor-users: > lahman writes to shlaer-mellor-users: > > My view of a synchronous service is that it is a transform > > process. One can use a transform process for any activity that > > does not generate an event to another object. That is, it does > > not directly affect the flow of control at the domain level. > > Thus it is limited to synchronous or 'realized' processing at > > the instance level. > > I don't share your view of synchronous services in that I > don't view synchronous services as being at any lower level > of abstraction than state actions. I think synchronous > services should be able to do anything an action can do > including generating events. First let's try and be clear what a "synchronous service" is. If it's a transform, then it can't do any of those things: it can't read/write attributes, create/delete objects, generate events, invoke other transforms, nor invoke other tests.
It can always be specified in terms of a function with no side effects that operates on its input dataflows to produce its output dataflows. OTOH, if a synchronous service is something that is defined by an SDFD, then it can do anything that an ADFD can do. (more on this in a moment) > [I'm not sure what the "official" view is here. > Is there one?] In SM (OOA96 + bridges/wormholes paper), an SDFD is the endpoint of a wormhole. They can only be invoked from a bridge. This restriction is important because its really nice to keep stack-based processing out of domains. If an SDFD could be invoked from an ADFD, and an SDFD can do anything that an ADFD can do, then an SDFD could invoke an SDFD -- even itself. Recursion may be divine, but only in a pure context: for example, the defintion of a transform. KC allow synch. services to be used in domains. Their OCM shows both sych and asych "events". As far as I know this is a non-standard, tool-specific, extention. > I could try to go into all of the different situations in > which I think synchronous services are useful and why in > an attempt to validate my argument, but that would take > a while. Maybe on a later thread. That might be useful, because I've never seen a situation where a bit more analysis couldn't eliminate the need. Don't bother to enumerate "all" the situations: the one you feel is most important should suffice. > I was hoping this thread would focus on the issue of > synchronous vs. asynchronous communication. If you > accept the theory that a state model that exhibits > a bad pattern could be modeled as services instead > of a state model, then you are potentially changing > asynchronous communication (events) to synchronous > communication (service calls). > > It seems like it may be better to keep the bad state > model and preserve the asynchronous communication > rather than use synchronous communication. A bad state model usually indicates a bad OIM, which sometimes indicates a bad domain chart. Petal and Star models can sometimes be eliminated by moving the actions into the state model that uses them. Doing this may require you to rework the OIM to avoid redundancy. Even more often, I find that the actions belong in another domain. At this point you often get a lot single-object domains: that's when you rework the domain-chart to get some sensible abstractions. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Steve Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Whipp... > > > I see no reason to constrain myself to either, or even both. > > Though each is mathematically complete, in terms of abstraction > > both are inadequate. There are concepts which cannot be modelled > > without violating one-fact-in-one-place. > > > > For example, to emulate an action-on-exit in Mealy, you must > > duplicate the action onto every outgoing transition. To do > > this in Moore, you must first implement action-on-transition. > > This requires a transitent state. Finally, if you want to > > implement action-on-entry in Mealy then you also need a > > transient state. 
> > > > Moore and Mealy are fine in the context of an implementation > > but they do not lead to normalised descriptions of behaviour. > > What little I knew about finite state machine theory has long since > seeped out through my toes, so I can't come up with a > substantive reason > why it is not a good idea to mix approaches in the same > model. But I'd > bet a modest sum there was one. B-) Just a moment for me to pop out of net.lurker mode. While most of FSM theory has seeped out of my head too, I do remember a bit... I've not had any problems mixing the approaches (hybrid Mealy & Moore in the same model). To me the issue is rooted in Steve & Sally's desire to model with a minimum set of concepts (i.e., a minimal language). IMHO, this makes it easy on the translator because there are fewer language constructs to need translations for. Moore and Mealy are provably equivalent (one can be mechanically translated into the other). The hybrid model can be considered to have been started at either pure Moore or pure Mealy and then arbitrarilly translated at specific points. To me, the hybrid model is more expressive in that some things, such as "continuous actions while in a state" are more clearly expressed as state actions. S&M's "looping transition on an arbitrarilly short interval back to the same state with an entry action" to simulate cuntinuous behavior has always been (IMHO) contrived at best. To me, the language should be easy on model writers, not on model translators. If the minimal-language criteria were applied to more contemporary programming languages, we'd only have 3 basic flow-of- control constructs, sequence (linear statements between begin/end), selection (if then/else), and iteration (some variation of do/until). But notice that programming languages have things like case/switch, repeat/until, for-next, etc to make it more convenient for the programmers (at the slight cost of increasing the complexity of the translator/compiler). Bottom line is that I haven't been following the conversation that closely, but if it were up to me I'd say, "relax the S&M language and allow hybrid state models". Back to net.lurker mode, -- steve Subject: Re: (SMU) Synchronous vs. Asynchronous behavior baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- >> [I'm not sure what the "official" view is here. >> Is there one?] > >In SM (OOA96 + bridges/wormholes paper), an SDFD is >the endpoint of a wormhole. They can only be invoked >from a bridge. > >This restriction is important because its really nice >to keep stack-based processing out of domains. If an >SDFD could be invoked from an ADFD, and an SDFD can >do anything that an ADFD can do, then an SDFD could >invoke an SDFD -- even itself. Recursion may be >divine, but only in a pure context: for example, the >defintion of a transform. OK, so let's say that I have a state model that was developed prior to the concept of an SDFD, and I decide that it is not really a state model, but just a collection of services. It also just happens to be the case that all the events were generated from the bridge. So after the modification, one domain invokes the wormhole (no transfer vector or return coordinate needed) and the wormhole invokes the synchronous service. 
Does this bridging separation allow me to implement this interaction asynchronously in my design? Thanks, Bary Hogan Subject: RE: (SMU) Mixed Mealy/Moore (was: is a self directed event specia "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp: --------------------------- >If you stick with either pure Mealy or pure Moore, then you >always end up adding transient states. ..in Mealy >you add them when you want an action-on-entry to a state. I would argue that this construct in a Mealy machine is needed only if you're trying to "optimize away" duplicate actions leading to a common state. If so, shouldn't the duplicates be eliminated in the design? -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... > I don't share your view of synchronous services in that I don't view > synchronous services as being at any lower level of abstraction than > state actions. I think synchronous services should be able to do > anything an action can do including generating events. This opens up a rather nasty can of worms in that now there would be no limit on the level of nesting of functionality and one could not understand flow of control by looking only at state models and the OCM. In our experience S-M developments are about an order of magnitude easier to maintain than conventional developments. One major reason for this is that flow of control is breadth-first. When I look at a state model (with action descriptions visible) I see every interaction between that model and the rest of the system in a single diagram -- every attribute write, every instance creation/deletion, and every event generated. And I can be absolutely confident that no matter how complex the processing, a transform doesn't do anything except operate on inputs and outputs. (Though it seems to me that a transform could read attribute data without irreparable harm since the transform would be subject to the same level of data integrity as the invoking action.) As soon as I start putting those activities in synchronous services I lose visibility at the SM/OCM level and that is a hindrance to maintenance. Far worse, though, is the anarchy introduced by allowing SSes to call other SSes or the ability to call another object's SS. If that is allowed without the transform restrictions, one is essentially back in the spaghetti code of procedural programming. For me there is considerable value in a methodology/notation that simply does not allow me to shoot myself in the foot. I am prepared to accept some occasional inconvenience in modeling for that overall protection. > It seems like it may be better to keep the bad state model and preserve > the asynchronous communication rather than use synchronous > communication. Or, if there is a way to specify that a service call is > asynchronous, then maybe you haven't sacrificed anything (as far as > keeping the communication asynchronous). This is another justification for keeping SSes simple. One of the benefits of S-M is that the OOA should be portable into any environment. 
Thus one should be able to port an OOA from a synchronous, single tasking environment to a distributed, asynchronous environment without modification. SSes that generate events and can be invoked sideways may make that a tad tricky. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Mixed Mealy/Moore (was: is a self directed event specia David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" writes to > Responding to Whipp: > >If you stick with either pure Mealy or pure Moore, then you > >always end up adding transient states. ..in Mealy > >you add them when you want an action-on-entry to a state. > > I would argue that this construct in a Mealy machine > is needed only if you're trying to "optimize away" > duplicate actions leading to a common state. Correct. > If so, shouldn't the duplicates be eliminated in the design? No. The elimination of duplication is most important in the context of maintenance. In SM, the models are the source code, not the design. So it is the models that must be maintained. In fact, it is often necessary to _introduce_ duplication into a design in order to increase efficiency. That's what most optimizations are. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- "Steve Tockey" wrote: > To me, the hybrid model is more expressive in that some > things, such as "continuous actions while in a state" are > more clearly expressed as state actions. S&M's "looping > transition on an arbitrarilly short interval back to the > same state with an entry action" to simulate cuntinuous > behavior has always been (IMHO) contrived at best. Well that's one use for self-directed events that I'd neglected. Probably because it comes near the top of my list of "things you should never do in a model, under any circumstances, even when there's no possible way to avoid it"! > To me, the language should be easy on model writers, > not on model translators. True, but remember that the model _and_ the translator should both be considered to be part of the same project. So all you are doing is moving the complexity from one part of the project to another. The advantage of moving the complexity into the translator is that it becomes a one-time hit, whereas putting complexity into the modeling will become a multiple hit. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) is a self directed event special (was: Anonymous event "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Tockey: > --------------------------- > >To me, the language should be easy on model writers, not on model >translators. 
If the minimal-language criteria were applied to more >contemporary programming languages, we'd only have 3 basic flow-of- >control constructs, sequence (linear statements between begin/end), >selection (if then/else), and iteration (some variation of do/until). >But notice that programming languages have things like case/switch, >repeat/until, for-next, etc to make it more convenient for the >programmers (at the slight cost of increasing the complexity of the >translator/compiler). > >Bottom line is that I haven't been following the conversation that >closely, but if it were up to me I'd say, "relax the S&M language and >allow hybrid state models". I agree, but we need to be careful with the programming-language analogy. State models should not be thought of as arbitrary computer programs, and thus a unbridled language is not necessarily the best thing for the practice. Steve Mellor argued against the use of the full power of statecharts at the embedded systems conference on the basis that the models would become too hard to read. I can see that point but I think OOA would be better served by an argument on the basis of an essential SM state model concept and what notations and features would be needed to support that. For example, the essential concept would contain the rules for events (e.g., the "same data" rule, and "events are never lost") and actions (e.g., "actions take time" and "actions run to completion before another event is received".) Such rules could be expressed in a way so as to exclude the kinds of state modeling extensions which are antithetical to SMOOA (e.g., infinite loops within states) while allowing those which are consistent with the Shlaer and Mellor's vision. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) Mixed Mealy/Moore (was: is a self directed event specia "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp: --------------------------- >> (CL) I would argue that this construct in a Mealy machine >> is needed only if you're trying to "optimize away" >> duplicate actions leading to a common state. > >(DW) Correct. > >> If so, shouldn't the duplicates be eliminated in the design? > >No. The elimination of duplication is most important in the >context of maintenance. In SM, the models are the source >code, not the design. So it is the models that must be >maintained. > >In fact, it is often necessary to _introduce_ duplication >into a design in order to increase efficiency. That's >what most optimizations are. I should have been more precise about what I meant by "optimize away": the (sometimes false) economy of code which reduces the length of the program text by moving a common ending sequence out of a multi-way branch and placing it at the beginning of the next block of (unrelated) code. For example, say there are three transition actions leading into a common state. Each of the actions ends with the same sequence of attribute manipulations. (I know what you're thinking; no fair saying I have a bad model! 
;-) ) We have a choice of whether to leave the common action-fragments associated with their relatives (i.e., the first part of the action on the transition) or move them to the beginning of the (perhaps unrelated) state-entry action. One choice reduces redundancy but diminishes the cohesion of the state model by separating things which are related (the parts of the action) and putting them with something unrelated. This can very easily reduce the maintainability by making the response to the event more difficult to determine. It can also make things more difficult if later one of the events needs to be handled in a different way, such as to add one more action-step at the end or to delete the formerly common fragment. The other way, of course, introduces the possibility of triple work on changes to the common fragments. I have handled this in the past by making the fragment "callable" and invoking it where needed. (I restrict use of this to the object which "owns" the fragment.) I have also found this style of model to be maintainable and efficient in time and space. It also lets me me do mixed Mealy/Moore with no transient states (and of course, no self-directed, dummy events.) -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: RE: (SMU) Synchronous vs. Asynchronous behavior "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > ...one could not understand flow of > control by looking only at state models and the OCM. You show object-level and domain-level service invocations on the Scenario Models (preliminary OCMs). Subject: RE: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > > I could try to go into all of the different situations in > > which I think synchronous services are useful and why in > > an attempt to validate my argument, but that would take > > a while. Maybe on a later thread. > > That might be useful, because I've never seen a situation > where a bit more analysis couldn't eliminate the need. > Don't bother to enumerate "all" the situations: the > one you feel is most important should suffice. > OK, here goes: 1. When a logically related sequence of processing would otherwise appear multiple times in the models. As an example, consider subtype migration (see "Patterns in OOA" at http://www.kc.com/html/download.html, specifically the Role Migration Pattern). When one subtype migrates to another subtype, then the first subtype that is being deleted must synchronously create the second subtype. Since creating the second subtype may (and probably does) involve more than just creating the instance, it is better to use a synchronous service associated with the second subtype (object based in this case). If I don't use a synchronous service, then every other subtype that can migrate to the second subtype has to know everything about creating it. This would violate the one fact in one place principle. 
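To make the pattern concrete, here is a minimal sketch in Python (the Order, Pending Order and Active Order names, and the create_for service, are invented for illustration; this is not code from the KC paper or from any tool). The creation knowledge for the new subtype lives in one object-based synchronous service, so no migrating-from subtype needs to know it:

    class Order:                                # hypothetical supertype
        def __init__(self, order_id):
            self.order_id = order_id
            self.subtype = None                 # exactly one subtype at a time

    class ActiveOrder:                          # hypothetical target subtype
        def __init__(self, order, priority):
            self.order = order
            self.priority = priority

        @staticmethod
        def create_for(order):
            # Object-based synchronous service: the single place that knows how
            # to bring an Active Order into existence and relink the supertype.
            order.subtype = ActiveOrder(order, priority=0)
            return order.subtype

    class PendingOrder:                         # one of several possible old subtypes
        def __init__(self, order):
            self.order = order
            order.subtype = self

        def migrate(self):
            # State action of the migrating subtype: delete self and synchronously
            # create the successor, so the supertype is never left without a subtype.
            ActiveOrder.create_for(self.order)

    # Usage: the Pending Order never needs to know how an Active Order is built.
    order = Order(order_id=42)
    PendingOrder(order)
    order.subtype.migrate()
    assert isinstance(order.subtype, ActiveOrder)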
Also, consider any state model that has multiple create or delete states. There is usually a common sequence of actions that is done in every create or delete state for a particular object. > > A bad state model usually indicates a bad OIM, > which sometimes indicates a bad domain chart. > > Petal and Star models can sometimes be eliminated > by moving the actions into the state model that uses > them. Doing this may require you to rework the OIM > to avoid redundancy. Even more often, I find that > the actions belong in another domain. At this point > you often get a lot single-object domains: that's > when you rework the domain-chart to get some > sensible abstractions. I agree that reworking the domain chart, or the information model, or even the control of the domain can usually eliminate "bad behavior". However, when working with existing systems and existing models, it is not always practical to do extensive rework. I like to identify those areas with the highest potential for improvement and make changes when time permits. As Leon Starr said, we're still better off doing an analysis even if it exhibits some of those bad patterns. Thanks, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > Yes, but the action associated with Ready is the same one that is > > associated with the return from Biased. So the same action > > on Fetch -> Ready would be a superset that included the same > > activity as on Bias -> Ready. > [...] > > Exactly. They are two different situations and two different events. > > But the action associated with -> Ready is the same. > > That's why you put actions on transitions _and_ on entry > to states. The common actions go on-entry; the differences > go on the transitions. > > If you stick with either pure Mealy or pure Moore, then you > always end up adding transient states. In Moore you add > them when you want an action-on-transition; in Mealy > you add them when you want a action-on-entry to a state. > Synthesis of action-on-exit from a state requires a > hideous number of transient states, plus queued > expediated self-directed events! Yes, but my assumption was that one uses one or the other. > > No more transient than any other state. S1 has its own activity > > associated with it. > > Definintion: In the context of SM, a transient state is one > that unconditionally generates an expediated self-directed > event. Such a state may do any number of other things > including generating other events. Hense the transient > state can be thought of as an action on a transition. This reminds of the field of Thermodynamics: a complex system developed to prove its own assumptions as theorems. I don't see generating a self-directed event as implying anything special about a state or the state model. So I see no need to make such a definition. > The outgoing self-directed event may contain data that is > different to the data on the event that caused the > transition into the state. > > A transient state may generate multiple self-directed > events. This allows an action-on-exit to be synthesised > with no duplication of the on-exit action, but the > cost of doing so is high. It also allows the state to > force a sequence of transitions through the state > machine. 
But the state only knows that in the current notation. If we remove the event navigation from the state action, then the action has no clue about this so the state is not forcing anything. The *analyst* makes that determination when dealing with the _domain level flow of control_. The analyst looks around for the circumstance that indicates the transition should occur and discovers -- fancy that! -- that a state in the same instance satisfies the condition for that circumstance. So the analyst generates the transition event there. It is serendipity that the generation and transition are in the same instance. Change the domain requirements somewhat and the event might have to be generated elsewhere without modifying the state model at all. > If a self-directed event is not unconditional, then > it will be conditional. The condition could be on: > > 1. attributes in the domain > 2. event data > 3. explicit wormhole return. > > The stereotypical use of (1) is when events are > implemented in data to bypass the event queue. This > use hides itself in many garbs, but is the most > common use of conditional self-directed events. > "Held" events are often synthesised this way. Yes. > (2) indicates that 2 events have been merged into one. > Its almost certainly a modelling error. There are > probably a few pathological examples where it is > acceptable. It is interesting to consider this > case when thinking about an ideal state model > notation from a one-fact-in-one-place perspective. I am not convinced this is a result of combining events. Consider a GUI domain in an air traffic control system where plane Icons that are a safe distance are green, those in danger of collision are red, and those that have just collided are black. An Icon object gets a Status message where the status value is 'collided', which transitions it to the Status Updated state (where it may already be). The associated action changes the color to black and executes the Death Spiral graphic as a synchronous service (just to link up all the active threads) that eventually carries the icon off the bottom the screen with a suitable amount of trailing smoke and flame. Once this work is complete the Status Updated action issues an event that will transition Icon to the Deleted state. I assume you would argue that instead of a single Status event there should have been three separate events for Normal Status, Iffy Status, and Bad Luck Status. In this case the Bad Luck Status event would transition to Icon's Deleted state whose action would change the color to black and execute the Death Spiral so no self-directed event is necessary. (Let's ignore the possibility that we might want to enter the Deleted state after an occasional safe landing.) My counter is that whoever is sending the status information may not know the semantics of the status or its significance to Icon. In particular, to send three events rather than one would require the sending entity to understand the internals of the Icon state machine, which I would regard as a major No-No. To me it is far more natural for Icon to receive a status value and decide, itself, what to do about it. > I can't generalize the use of (3). I've not used it. > > Its possible to mix the conditions: I don't do that > either. You don't have to do something just because > its possible. Since S1 -> S2 is unconditional, this analysis is all very interesting but I am not sure of the point. B-) -- H. S. 
Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Tockey... > Just a moment for me to pop out of net.lurker mode. While most of FSM > theory has seeped out of my head too, I do remember a bit... > > I've not had any problems mixing the approaches (hybrid Mealy & Moore in > the same model). To me the issue is rooted in Steve & Sally's desire > to model with a minimum set of concepts (i.e., a minimal language). > IMHO, this makes it easy on the translator because there are fewer > language constructs to need translations for. Perhaps. But there are lots of references to Mealy, Moore, and Harel individually, but I can't recall a reference to Mealy-Moore (Moore-Mealy?) before this thread. I am just worried that there is a reason for that. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Hogan... > > > I don't share your view of synchronous services in that I don't view > > synchronous services as being at any lower level of abstraction than > > state actions. I think synchronous services should be able to do > > anything an action can do including generating events. > > This opens up a rather nasty can of worms in that now there would be no limit on > the level of nesting of functionality and one could not understand flow of > control by looking only at state models and the OCM. > > In our experience S-M developments are about an order of magnitude easier to > maintain than conventional developments. One major reason for this is that flow > of control is breadth-first. When I look at a state model (with action > descriptions visible) I see every interaction between that model and the rest of > the system in a single diagram -- every attribute write, every instance > creation/deletion, and every event generated. And I can be absolutely confident > that no matter how complex the processing, a transform doesn't do anything except > operate on inputs and outputs. (Though it seems to me that a transform could > read attribute data without irreparable harm since the transform would be subject > to the same level of data integrity as the invoking action.) As soon as I start > putting those activities in synchronous services I lose visibility at the SM/OCM > level and that is a hindrance to maintenance. Wow! You make action descriptions visible on your state model? You must have a lot of simple state models (which is a good thing). Actually, I understand what you're saying. I like to being able to see which events are generated from a particular state, etc. from the state model diagram. (But as Fontana pointed out, you can put synchronous invocations on the OCM. These would only show up between objects, but internal service calls would be visible in the state actions.) 
> > Far worse, though, is the anarchy introduced by allowing SSes to call other SSes > or the ability to call another object's SS. If that is allowed without the > transform restrictions, one is essentially back in the spaghetti code of > procedural programming. For me there is considerable value in a > methodology/notation that simply does not allow me to shoot myself in the foot. > I am prepared to accept some occasional inconvenience in modeling for that > overall protection. I understand your concern, but I'm afraid that the cat is already out of the bag so to speak. I could be wrong, but my perception is that many S-M camps already use synchronous services in a variety of ways. My concern is how to establish guidelines to avoid abusing them (which may be difficult). > > > It seems like it may be better to keep the bad state model and preserve > > the asynchronous communication rather than use synchronous > > communication. Or, if there is a way to specify that a service call is > > asynchronous, then maybe you haven't sacrificed anything (as far as > > keeping the communication asynchronous). > > This is another justification for keeping SSes simple. One of the benefits of > S-M is that the OOA should be portable into any environment. Thus one should be > able to port an OOA from a synchronous, single tasking environment to a > distributed, asynchronous environment without modification. SSes that generate > events and can be invoked sideways may make that a tad tricky. And, calls to synchronous services that didn't _need_ to be synchronous can't necessarily be implemented asynchronously. (Although we are considering using tagging in the design to do just that.) Thanks for the feedback! Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: RE: (SMU) Synchronous vs. Asynchronous behavior David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: > So after the modification, one domain invokes the wormhole > (no transfer vector > or return coordinate needed) and the wormhole invokes the > synchronous service. > Does this bridging separation allow me to implement this interaction > asynchronously in my design? It should do. It depends on your architecture. Dave. Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote > [Responding to me] > > Definintion: In the context of SM, a transient state is one > > that unconditionally generates an expediated self-directed > > event. Such a state may do any number of other things > > including generating other events. Hense the transient > > state can be thought of as an action on a transition. > > This reminds of the field of Thermodynamics: a complex system > developed to prove its own assumptions as theorems. I don't > see generating a self-directed event as implying anything > special about a state or the state model. So I see no need > to make such a definition. An state that unconditionally generates a self-directed event will never remain in the state long enough to receive an event from any other instance. Furthermore, at the end of its action, the event to move it out of the state is already at the front of self's queue. So it seems reasonable to use the term "transient" to describe this condition. 
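As an illustration of the queueing behaviour this relies on, here is a deliberately simplified sketch in Python (invented names; it is not the OOA96 rules verbatim, only the idea that a self-directed event is expedited to the front of the instance's own queue, so a state that unconditionally self-directs is left before any event from another instance can be seen):

    from collections import deque

    class Instance:
        def __init__(self, initial_state):
            self.state = initial_state
            self.queue = deque()                # this instance's event queue

        def send(self, event, self_directed=False):
            if self_directed:
                self.queue.appendleft(event)    # expedited: front of self's queue
            else:
                self.queue.append(event)        # ordinary: back of the queue

        def dispatch(self):
            while self.queue:
                event = self.queue.popleft()
                self.state = self.state(self, event)   # run the response, note the new state

    def transient_state(instance, event):
        # ... the "action on a transition" work goes here ...
        instance.send("done", self_directed=True)      # unconditional self-directed event
        return settled_state

    def settled_state(instance, event):
        print("handled", event, "in the settled state")
        return settled_state

    # "done" is consumed before the previously queued external event, so the
    # instance never lingers in the transient state.
    i = Instance(transient_state)
    i.send("kick")         # event that takes the instance into the transient state
    i.send("external")     # an event from some other instance, already waiting
    i.dispatch()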
I find using the term "transient state" more useful than "state that unconditionally generates a self-directed event" for the same reason that one uses the term "observer" instead of writing "object that registers itself to be notified when another object is updated". It provides a clean encapsulation of a concept. In fact, I do regard "transient state" as a real pattern in SM analysis. > But the state only knows that in the current notation. > If we remove the event navigation from the state action, > then the action has no clue about this so the state is > not forcing anything. Which takes us back full circle. I listed a number of uses for self-directed events. You then added a few more, which I then got diverted into arguing about. Patterns like "transient state" rely on the fact that self-directed events are expedited. If it is not obvious on the STD that the event is self-directed then it would not be obvious that the state is transient. It would be wrong to move this information out of the state model. No other instance in the system needs to know, and there's no communication between different instances, so the OCM doesn't need to know. > The *analyst* makes that determination when dealing with the > _domain level flow of control_. No. You argued yourself that the Fetch->Ready event is introduced by the analyst looking at the _object level_ flow of control. Transient states, forced paths, and on-exit synthesis are all patterns of object-level flow of control. "Held-event in data" is perhaps different: it doesn't always rely on expediting, but when it does, the reason for that is object-level, not domain-level. > The analyst looks around for the circumstance that > indicates the transition should occur and discovers > -- fancy that! -- that a state in the same instance > satisfies the condition for that circumstance. I hope not. That would be coincidental cohesion: the introduction of encapsulation based on similarity of code, not problem. The discovery process should be the inverse. The analyst looks at the domain flow of control and sees that there's one event to get from state A to state B. But the object needs some work done before B's action can be used. The solution is to introduce a transient state. You used the example of cold and warm reset. Both events send the object to its reset state, but cold-reset needs extra work before it gets there. The domain will only provide one event for the cold-reset, so a transient state is introduced to do the extra work. (Of course, if the object-level flow of control indicated that the resets resulted in different states, then you wouldn't need the transient state.) You gave another example: an emergency power-down sequence of transitions that relied on a sequence of expedited events. This would not be guaranteed to work if the events weren't expedited (events from other instances could interfere). So the action that generates the events is relying on a property of the routing. Again, it's an object-level flow of control issue, not a domain-level one. The only interaction with the domain's flow is that no events from other objects are delivered until the sequence is complete. > So the analyst generates the transition event > there. It is serendipity that the generation and > transition are in the same instance. Change the > domain requirements somewhat and the event might have to be > generated elsewhere without modifying the state model at all. I think you need to follow this through. It is not self-consistent.
If another instance generates the event, and you don't modify the state model, then the event will be generated twice. > I am not convinced this is a result of > combining events. Consider a GUI domain in an air traffic > control system where plane Icons that are a safe distance > are green, those in danger of collision are red, and > those that have just collided are black. An Icon object > gets a Status message where the status value is 'collided', > which transitions it to the Status Updated state (where it > may already be). This all sounds (a) contrived and (b) polluted. Why do Icons know about collisions of aircraft? If you do manage to demonstate that its all one domain, then you'll still need to explain why the action that detected the collision (by looking at the distance) can't send a "planes collided" event instead of a "status" event. Its a good bet that a "status" event indicates incomplete analysis. > My counter is that whoever is sending the status information > may not know the semantics of the status or its significance > to Icon. In particular, to send three events rather than one > would require the sending entity to understand the internals > of the Icon state machine, which I would regard as a major > No-No. To me it is far more natural for Icon to receive a > status value and decide, itself, what to do about it. But the sender will understand the mission of the domain. In fact, the sender is an integral part of the domain's mission (its in the domain). If the domain needs to distinguish collisions from near misses, then the sender (which must decode these conditions if it is to send "collided" or "near miss" data for a status event) can send different events. If the receiver wants to handle two events in an identical way then it can have two transitions between two states: one for collisions, the other for near misses. The sender sends two events because the _domain_ distinguishes the events. No assumption is made about how the receiver will handle them. It can handle them as a single event if it so desires. > Since S1 -> S2 is unconditional, this analysis is all very > interesting but I am not sure of the point. B-) The point is to understand events and the patterns of their use. The conditional case is more interesting than the unconditional, so I explore it. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Synchronous vs. Asynchronous behavior David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary D Hogan writes to shlaer-mellor-users: > When one subtype migrates to another subtype, then the first subtype > that is being deleted must synchronously create the second subtype. > Since creating the second subtype may (and probably does) involve more > than just creating the instance, it is better to use a synchronous > service associated with the second subtype (object based in this > case). If I don't use a synchronous service, then every other subtype > that can migrate to the second subtype has to know everything about > creating it. This would violate the one fact in one place principle. I don't like the pattern. Why not simply send a creation event to the subtype being created? The creation state can handle the common creation stuff. 
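For contrast with the synchronous-service version, here is a minimal sketch in Python of the creation-event alternative being suggested (again, all names are invented). The shared creation work sits in the creation state of the new subtype; note that the supertype is briefly without any subtype until the creation event is dispatched, which is the window discussed later in the thread:

    from collections import deque

    class Order:                                 # hypothetical supertype
        def __init__(self):
            self.subtype = None

    class ActiveOrder:                           # hypothetical new subtype
        def __init__(self, order):
            self.order = order
            self.priority = 0                    # the shared creation work

        @staticmethod
        def creation_action(order, event_data):
            # Creation state: entered via a creation event; does the common
            # creation work once, regardless of which old subtype requested it.
            order.subtype = ActiveOrder(order)

    def migrate_action(order, queue):
        # Old subtype's delete action: remove itself and generate a creation event.
        order.subtype = None
        queue.append((ActiveOrder.creation_action, order, {"reason": "migration"}))

    def dispatch(queue):
        while queue:
            action, order, data = queue.popleft()
            action(order, data)

    order, queue = Order(), deque()
    migrate_action(order, queue)                 # supertype momentarily has no subtype
    dispatch(queue)
    assert isinstance(order.subtype, ActiveOrder)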
In the specific example from KC, they use IOOA-specific concepts like link/unlink and instance handles. OOA96 has neither of these things, so the deletion states would be trivial in an OOA96 model. Link and Unlink are horrible concepts. They are completely redundant. Don't use them! > Also, consider any state model that has multiple create or delete > states. There is usually a common sequence of actions that is done in > every create or delete state for a particular object. Why not give me a specific example. Lets see the kind of things that are duplicated. Then we can get rid of them. > I agree that reworking the domain chart, or the information model, or > even the control of the domain can usually eliminate "bad behavior". > However, when working with existing systems and existing models, it is > not always practical to do extensive rework. I like to identify those > areas with the highest potential for improvement and make changes when > time permits. As Leon Starr said, we're still better off doing an > analysis even if it exhibits some of those bad patterns. Ah well, if you're looking for ways to hack a bad model, then you might need some bad techniques. As a short term hack, you can use whatever your tool vendor allows. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: (SMU) Re: shlaer-mellor-users-digest V98 #98 TGett@aol.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Anyone care to guess what the Vnn #nn will be in the first Shlaer-Mellor-users-digest of the new millenium? What will it be, what will it be....? tgett Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > > ...one could not understand flow of > > control by looking only at state models and the OCM. > > You show object-level and domain-level service invocations on the Scenario > Models (preliminary OCMs). But the *content* of the SS would not be visible. My point was that if those SSes are doing things like generating events, those activities that would have been visible are now hidden at the SM/OCM level. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- On Tue, 7 Dec 1999 20:45:03 -0800 , you wrote: >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Bary D Hogan writes to shlaer-mellor-users: > >> When one subtype migrates to another subtype, then the first subtype >> that is being deleted must synchronously create the second subtype. >> Since creating the second subtype may (and probably does) involve more >> than just creating the instance, it is better to use a synchronous >> service associated with the second subtype (object based in this >> case). If I don't use a synchronous service, then every other subtype >> that can migrate to the second subtype has to know everything about >> creating it. 
This would violate the one fact in one place principle. > >I don't like the pattern. Why not simply send a creation event >to the subtype being created? The creation state can handle >the common creation stuff. That is the way we did it originally. The problem is that you need to ensure that one subtype is deleted and the other is created within the "protection" of a single action. This problem was discussed on this group some time ago. At the time I suggested that the Create event to the second subtype could be treated as self-directed such that it would be received in the second subtype prior to the supertype accepting any polymorphic events. That idea didn't seem to be very well received, but that is how we implemented this prior to OOA 96 (actually we implemented these synchronously which effectively gave them priority). Treating these "migration" events as self-directed solves the problem of accepting polymorphic events without having an active subtype, but ignores the problem of not _having_ an active subtype. Super/subtype relationships are not conditional, so you should always have a subtype for an instance of a supertype. Even a self-directed event takes time. You could argue that you should delete the supertype instance also, then re-create it with the new subtype. That doesn't seem natural to me. The instance of something in the real world is not going away, so there should be no time in which it doesn't exist. I thought that the consensus (if you can call it that) from the previous discussion was that this should be done synchronously. I didn't like it at the time, but it seems to be best solution given the available constructs. > >In the specific example from KC, they use IOOA-specific >concepts like link/unlink and instance handles. OOA96 >has neither of these things, so the deletion states >would be trivial in an OOA96 model. Link and Unlink >are horrible concepts. They are completely redundant. >Don't use them! Well, I don't have much choice, but I think that the Bridgepoint action language uses link/unlink also. I agree with you about the problems with having referential attributes and the link/unlink concept. [I've actually been warming up to the idea of eliminating referential attributes in a UML type of notation, but that's a different thread.] The point I would like to make here is that there could be many things that go along with creating the next subtype (or any object). The new instance may have other relationships that may need to be conditionally established. What about unique attributes of the new instance? It just doesn't make sense for the subtype that is being deleted to know everything about creating every other subtype that it could migrate to. > >> Also, consider any state model that has multiple create or delete >> states. There is usually a common sequence of actions that is done in >> every create or delete state for a particular object. > >Why not give me a specific example. Lets see the kind of things >that are duplicated. Then we can get rid of them. > Have you never seen a valid use of multiple create or delete states? Unfortunately, I can't use very many specific examples, but I saw a case just the other day in which an instance of an object could be deleted due to an event with supplemental data, and it could also be deleted due to a timer event (which can't have supplemental data). I thought that two delete states made sense for this problem. 
I suppose it could have been done differently if you want to use "transient" states and self-directed events. > >> I agree that reworking the domain chart, or the information model, or >> even the control of the domain can usually eliminate "bad behavior". >> However, when working with existing systems and existing models, it is >> not always practical to do extensive rework. I like to identify those >> areas with the highest potential for improvement and make changes when >> time permits. As Leon Starr said, we're still better off doing an >> analysis even if it exhibits some of those bad patterns. > >Ah well, if you're looking for ways to hack a bad model, then >you might need some bad techniques. As a short term hack, you >can use whatever your tool vendor allows. So you think that using synchronous services is a bad technique? If so, then I disagree. I do agree that they can be abused, but I think that there valid uses for them also. [BTW, I'm not looking for ways to hack bad models. I'm looking for ways to guide people to better modeling, and ways to improve existing models that work, so I'm not admitting to having bad models :-)] Thanks, Bary Hogan Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- >baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: >-------------------------------------------------------------------- >>David.Whipp@infineon.com writes to shlaer-mellor-users: >>-------------------------------------------------------------------- >> >>Bary D Hogan writes to shlaer-mellor-users: >> >>> When one subtype migrates to another subtype, then the first subtype >>> that is being deleted must synchronously create the second subtype. >>> Since creating the second subtype may (and probably does) involve more >>> than just creating the instance, it is better to use a synchronous >>> service associated with the second subtype (object based in this >>> case). If I don't use a synchronous service, then every other subtype >>> that can migrate to the second subtype has to know everything about >>> creating it. This would violate the one fact in one place principle. >> >>I don't like the pattern. Why not simply send a creation event >>to the subtype being created? The creation state can handle >>the common creation stuff. > >That is the way we did it originally. The problem is that you need to ensure >that one subtype is deleted and the other is created within the "protection" of >a single action. This problem was discussed on this group some time ago. At >the time I suggested that the Create event to the second subtype could be >treated as self-directed such that it would be received in the second subtype >prior to the supertype accepting any polymorphic events. That idea didn't seem >to be very well received, but that is how we implemented this prior to OOA 96 >(actually we implemented these synchronously which effectively gave them >priority). Treating these "migration" events as self-directed solves the >problem of accepting polymorphic events without having an active subtype, but >ignores the problem of not _having_ an active subtype. Super/subtype >relationships are not conditional, so you should always have a subtype for an >instance of a supertype. Even a self-directed event takes time. > >You could argue that you should delete the supertype instance also, then >re-create it with the new subtype. 
That doesn't seem natural to me. The >instance of something in the real world is not going away, so there should be no >time in which it doesn't exist. > Not long ago we had a discussion similar to this supertype/subtype one. I don't know if we reached consensus or not since it seemed that the arguments to what I said actually agreed with what I said. :-) I guess I must have expressed myself badly, so I'll try again. (Glutton for punishment. ;-) ) You cannot delete the subtype instance without deleting the supertype instance, because they are both THE instance. You cannot create a subtype instance without creating a supertype instance, and you cannot delete a subtype instance without deleting the supertype instance. This is perfectly natural, because the subtype/supertype instance is an instance of the same object. I think taking a view of them being separate instances would make it hard to accept that you could have instances of different subtypes of the same supertype in existence at the same time. Clear as mud? :-) =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: (SMU) Synchronous vs. Asynchronous behavior "Douglas E. Forester" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Wed, 8 Dec 1999, Lee Riemenschneider wrote: > lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > [snip] > > You cannot delete the subtype instance without deleting the supertype > instance, because they are both THE instance. You cannot create a subtype > instance without creating a supertype instance, and you cannot delete a > subtype instance without deleting the supertype instance. This is perfectly > natural, because the subtype/supertype instance is an instance of the same > object. I think taking a view of them being separate instances would make it > hard to accept that you could have instances of different subtypes of the same > supertype in existence at the same time. From an analysis point of view, you are wrong, Lee. Looking at the OIM you clearly see the supertype and subtype objects. The subtype objects are related to the supertype object in a {disjoint,complete} relationship. It is the analyst's responsibility to assure that the relationships are consistent when they need to be. I think this is the key in subtype migration. While migrating a subtype, you need to make sure that the rest of the world either doesn't care about the super/sub set of instances, or that it sees a consistent view. That is the analyst's responsibility. Your architecture can take any view it wishes about super/sub types. It can implement them as separate instances or by using inheritance. -------------------------------------------------------------- Douglas E. Forester dougf@projtech.com Tucson, Arizona Project Technology, Inc. (520) 544-2881 X13 Fax (520)544-2912 Visual Modeling Tools & Code Generation with Shlaer-Mellor OOA Subject: RE: (SMU) Synchronous vs. Asynchronous behavior "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Fontana...
> > > > ...one could not understand flow of > > > control by looking only at state models and the OCM. > > > > You show object-level and domain-level service invocations on > the Scenario > > Models (preliminary OCMs). > > But the *content* of the SS would not be visible. My point was > that if those > SSes are doing things like generating events, those activities > that would have > been visible are now hidden at the SM/OCM level. All OCM-interesting things that are done in a service are shown on the OCM - from that service - just as if it was a state action. eg: on the Sequence Chart, if a service generated an event, that event is shown. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- "Douglas E. Forester" writes to shlaer-mellor-users: -------------------------------------------------------------------- >On Wed, 8 Dec 1999, Lee Riemenschneider wrote: > >> You cannot delete the subtype instance without deleting the supertype >> instance, because they are both THE instance. You cannot create a subtype >> instance without creating a supertype instance, and you cannot delete a >> subtype instance without deleting the supertype instance. This is perfectly >> natural, because the subtype/supertype instance is an instance of the same >> object. I think taking a view of them being seperate instances would make it >> hard to accept that you could have instances of different subtypes of the same >> supertype in existance at the same time. > >From an analysis point of view, you are wrong, Lee. Looking at the >OIM you clearly see the supertype and subtype objects. The subtype >objects are related to the supertype object in a {disjoint,complete} >relationship. It is the analyst's responsibility to assure that the >relationships are consistent when they need to be. > OK. I'll buy that. I may have shot off my mouth too quick. :-) Migration is the key word I failed to comprehend. The view I expressed above is much more fitting to the non-migratory case. Apologies. Of course, on the OIM, you aren't looking at an instance. In a dynamic view, the supertype doesn't have multiple subtypes hanging off of it. >I think this is the key in subtype migration. While migrating a >subtype, you need to make sure that the rest of the world either >doesn't care about the super/sub set of instances, of that it sees a >consistent view. That is the analyst's responsibility. > Sometimes it can give one a migration migraine! :-) >Your architecture can take any view it wishes about super/sub types. >It can implement them as separate instances or by using inheritance. I have absolutely no argument with this. >-------------------------------------------------------------- >Douglas E. Forester dougf@projtech.com Tucson, Arizona >Project Technology, Inc. (520) 544-2881 X13 Fax (520)544-2912 >Visual Modeling Tools & Code Generation with Shlaer-Mellor OOA > Tucson. Did you know Cortland Starett back when he was at IBM? =========================================================================== Lee W. 
Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: Re: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > Not long ago we had a discussion similar to this supertype/subtype one. I > don't know if we reached consensus or not since it seemed that the arguments > to what I said actually agreed with what I said. :-) I guess I must have > expressed myself badly, so I'll try again. (Glutton for punishment. ;-) ) > > You cannot delete the subtype instance without deleting the supertype > instance, because they are both THE instance. You cannot create a subtype > instance without creating a supertype instance, and you cannot delete a > subtype instance without deleting the supertype instance. This is perfectly > natural, because the subtype/supertype instance is an instance of the same > object. I think taking a view of them being seperate instances would make it > hard to accept that you could have instances of different subtypes of the same > supertype in existance at the same time. > > Clear as mud? :-) Your point is clear, I just disagree with it. As long as your in the context of a single action, then you can delete one subtype and create the next one without deleting the supertype. As you said, the supertype/subtype combination is an instance of the same thing. We are modeling that thing migrating from one role to the next, so it never ceases to exist. If I delete the supertype, then I have to create it again and potentially re-populate the instance data with the same data it had when I deleted it. That doesn't make sense. As long as we leave the model consistent by the end of the action, then we have followed the principles of the method. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: (SMU) Synchronous vs. Asynchronous behavior "Douglas E. Forester" writes to shlaer-mellor-users: -------------------------------------------------------------------- On Wed, 8 Dec 1999, Bary D Hogan wrote: > Bary D Hogan writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Not long ago we had a discussion similar to this supertype/subtype one. I > > don't know if we reached consensus or not since it seemed that the arguments > > to what I said actually agreed with what I said. :-) I guess I must have > > expressed myself badly, so I'll try again. (Glutton for punishment. ;-) ) > > > > You cannot delete the subtype instance without deleting the supertype > > instance, because they are both THE instance. You cannot create a subtype > > instance without creating a supertype instance, and you cannot delete a > > subtype instance without deleting the supertype instance. This is perfectly > > natural, because the subtype/supertype instance is an instance of the same > > object. I think taking a view of them being seperate instances would make it > > hard to accept that you could have instances of different subtypes of the same > > supertype in existance at the same time. > > > > Clear as mud? :-) > > Your point is clear, I just disagree with it. As long as your in the > context of a single action, then you can delete one subtype and create > the next one without deleting the supertype. 
As you said, the > supertype/subtype combination is an instance of the same thing. We are > modeling that thing migrating from one role to the next, so it never > ceases to exist. If I delete the supertype, then I have to create it > again and potentially re-populate the instance data with the same data > it had when I deleted it. That doesn't make sense. As long as we > leave the model consistent by the end of the action, then we have > followed the principles of the method. Bary, if you choose to delete the supertype and subtype and recreate them, _and_ the analysis is done such that this is OK, then it may make perfect sense to do it. The definition of OK is: while migrating a subtype, you need to make sure that the rest of the world either doesn't care about the super/sub set of instances, or that it sees a consistent view. That is the analyst's responsibility. Doug Forester -------------------------------------------------------------- Douglas E. Forester dougf@projtech.com Tucson, Arizona Project Technology, Inc. (520) 544-2881 X13 Fax (520)544-2912 Visual Modeling Tools & Code Generation with Shlaer-Mellor OOA Subject: RE: (SMU) Synchronous vs. Asynchronous behavior David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: > That is the way we did it originally. The problem is that > you need to ensure that one subtype is deleted and the > other is created within the "protection" of a single action. > [...]. Super/subtype relationships are not conditional, so > you should always have a subtype for an instance of a > supertype. You can have as many (or few) subtype instances as you want. The important thing is that the analyst must ensure they are not used until the relationship is in a consistent state: i.e., there is exactly one subtype instance. > I agree with you about the problems with having > referential attributes and the link/unlink concept. [I've > actually been warming up to the idea of eliminating > referential attributes in a UML type of notation, > but that's a different thread.] That is a viable alternative. It's just the mixing that doesn't work. > The point I would like to make here is that there could be > many things that go along with creating the next subtype > (or any object). The new instance may have other > relationships that may need to be conditionally > established. What about unique attributes of the new > instance? It just doesn't make sense for the subtype that > is being deleted to know everything about creating every > other subtype that it could migrate to. Yes, there may be work to be done in creating the new subtype. But the natural place to describe this work is in a creation state of the created subtype. If the old subtype has unique information that determines values in the new subtype, then you can put the creation in the old subtype. But then, by definition ("unique"), no other potential old subtype will share this. > >Why not give me a specific example. Let's see the kind of things > >that are duplicated. Then we can get rid of them. > > Have you never seen a valid use of multiple create or delete > states? Yes, it's easy to find examples of multiple creation/deletion states. But different states are different for a reason: they do different things! What I was asking for was an example that shows the necessity for duplicated actions between multiple creation/deletion states.
> Unfortunately, I can't use very many specific examples, but > I saw a case just the other day in which an instance of an > object could be deleted due to an event with supplemental > data, and it could also be deleted due to a timer event > (which can't have supplemental data). I thought that two > delete states made sense for this problem. I suppose it > could have been done differently if you want to use > "transient" states and self-directed events. There's no problem having multiple delete states (unless they're the same state!). However, you probably know that OOA96 eliminated timers in favour of delayed events (which can carry data). So if you want timer functionality then you have to use standard wormholes/bridging ... so the incoming event from your timer could be given some data. > >Ah well, if you're looking for ways to hack a bad model, then > >you might need some bad techniques. As a short term hack, you > >can use whatever your tool vendor allows. > > So you think that using synchronous services is a bad technique? I didn't say that they are bad: simply that the question is irrelevant if your aim is to get a bad model working. > If so, then I disagree. I do agree that they can be abused, > but I think that there are valid uses for them also. But actually: yes, I have yet to be convinced that the encapsulation provided by synchronous services provides any real benefits. I've never needed to use them. (Actually, I did once, but that was in the state action of a model that was horribly polluted. At that time it was a single domain. I believe that it is now approaching 10 domains, now that it has been properly factored. The horrible state model that required the service no longer needs it.) > [BTW, I'm not looking for ways to hack bad models. I'm > looking for ways to guide people to better modeling, and > ways to improve existing models that work, so I'm not > admitting to having bad models :-)] Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Synchronous vs. Asynchronous (Lahman: OLC is 10 times Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- This is an interesting discussion about event-driven (asynch) and shared-memory access by 'friend' classes (sync). Lahman claims 10-fold easier maintenance since control flow is lateral among message-passing 'peers', instead of depth-first down the call tree. In the old days, structured analysis produced a hierarchy of DataFlow Diags (DFDs) to show the lateral flow at higher levels analogous to a hierarchy of State Models. Analysts didn't want to drill down into the code, but just spec the required info and behavior models and let programmers complete the implementation. Problem was: DFDs were not maintained thru changes, so they became obsolete and were not trusted by maintenance. State models show control flow, not data flow, but the flow is lateral thru event communications like data-flow diagrams. Visibility is now turned inside out by info-hiding. Then, info structures were exposed but process implementations were hidden inside of nodes. Now we see state actions that decompose transaction processes into fragments, but the info is less visible - (hidden inside action routines which access data members).
Now we have less of a need to know HOW the data is transformed, but there is more need to produce declarative specs of interfaces before implementation and more tools such as code generation to support them. Lateral control flow models simplify the definition of pre- and post-conditions which are more complete and more adequately express the behavior we want built or revised and tested. (It should be noted that state-box models in the Clean-Room approach also depict lateral control flow without requiring asyncronous message passing unless required for distributed processing without shared memory. [Ref: Witt/Baker/Merritt: Software Architecture & Design, VanNostrandReinhold 1994]) Bob Lechner Computer Science Dept. UMass Lowell lechner@cs.uml.edu //www.cs.uml.edu/~lechner Forwarded message: Subject: RE: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: > > > That is the way we did it originally. The problem is that > > you need to ensure that one subtype is deleted and the > > other is created within the "protection" of a single action. > > [...]. Super/subtype relationships are not conditional, so > > you should always have a subtype for an instance of a > > supertype. > > You can have as many (or few) subtype instances as you want. > The important thing is that the analyst must ensure they are > not used until the relationship is in a consistant state: i.e. > there is exactly one subtype instance. > In general, it seems that the best way to ensure this is to accomplish the deletion and creation within the same action. If the supertype accepts polymorphic events which can arrive at any time, then I need to be in one subtype or the other when an event is accepted. Somehow it seems wrong to me to allow the supertype instance to exist with no subtype instances (or multiple subtype instances) for some period of time. I can protect it within the context of a state action. How do I protect it outside of that context? > > The point I would like to make here is that there could be > > many things that go along with creating the next subtype > > (or any object). The new instance may have other > > relationships that may need to be conditionally > > established. What about unique attributes of the new > > instance? It just doesn't make sense for the subtype that > > is being deleted to know everything about creating every > > other subtype that it could migrate to. > > Yes, there may be work to be done in creating the new > subtype. But the natuaral place to describe this work > is in a creation state of the created subtype. If the > old subtype has unique information that determines values > in the new subtype, then you can put the creation in > the old subtype. But then, by definition ("unique"), no > other potential old-subtype will share this. Yes, putting it in the creation state is a perfect place to put it, unless I'm creating it synchronously. :-) With synchronous creations, the instance is created in a state, but the state is not executed (at least that's the way I understand it). > > > > >Why not give me a specific example. Lets see the kind of things > > >that are duplicated. Then we can get rid of them. > > > > Have you never seen a valid use of multiple create or delete > > states? 
> > Yes, it's easy to find examples of multiple creation/deletion > states. But different states are different for a reason: they > do different things! What I was asking for was an example > that shows the necessity for duplicated actions between > multiple creation/deletion states. I agree that multiple creation or deletion states are needed because they do different things. I don't have a specific example in mind, but it seems fairly plausible that they would also have common things that would be better documented as a common service and not duplicated. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronous behavior) Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > There's no problem having multiple delete states (unless > they're the same state!). However, you probably know that > OOA96 eliminated timers in favour of delayed events > (which can carry data). So if you want timer functionality > then you have to use standard wormholes/bridging ... so the > incoming event from your timer could be given some data. > Interesting. I looked this up in the OOA96 report (section 9.4) and that wasn't clear to me. It just has three process bubbles shown, and each only accepts an identifier as input. I also don't see anything in the text that addresses supplemental data. Am I looking in the wrong place? I can see how having supplemental data on a delayed event could make sense. The tool we use doesn't yet support this style of timer, so we haven't dealt with that yet. Do any of the SM tools support OOA96 timers, and if so do they allow supplemental data? I'm not sure I understand what you mean by "if you want timer functionality then you have to use standard wormholes/bridging". Are you talking about bridging to/from the architecture domain to provide the timer functionality? Thanks, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- "Douglas E. Forester" writes to shlaer-mellor-users: > > Bary, if you choose to delete the supertype and subtype and recreate > them, _and_ the analysis is done such that this is OK, then it may > make perfect sense to do it. The definition of OK is: while migrating > a subtype, you need to make sure that the rest of the world either > doesn't care about the super/sub set of instances, or that it sees > a consistent view. That is the analyst's responsibility. > OK, I'll buy the fact that you _can_ delete both the supertype and subtype instances and then recreate them. I'm just not sure why you would ever want to do that. It doesn't seem to model what is really going on. Also, I'm a bit nervous about leaving things hanging out there, and claiming that it's OK since I know that no one uses it during that time. It seems safer and more robust and more maintainable to make it consistent by the end of an action, especially when it is easy to do so. Thanks, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: RE: (SMU) Synchronous vs.
Asynchronous behavior David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary D Hogan writes to shlaer-mellor-users: > In general, it seems that the best way to ensure this is to accomplish > the deletion and creation within the same action. If the supertype > accepts polymorphic events which can arrive at any time, then > I need to > be in one subtype or the other when an event is accepted. Somehow it > seems wrong to me to allow the supertype instance to exist with no > subtype instances (or multiple subtype instances) for some period of > time. I can protect it within the context of a state action. > How do I protect it outside of that context? In the simultaneous interpretation of time, you can't use actions as an atomic unit. So you have to analyse the synchronization even if you do synchronous creation or deletion. How do you prevent simultaneous actions from messing things up? You have to ensure that the analysis contains a sequential constraint between all accesses to the data and your modifications. Sequential constraint flows along dataflows, events and transitions. SM is quite weak when it comes to abstractions of synchronization. > Yes, putting it in the creation state is a perfect place to put it, > unless I'm creating it synchronously. :-) With synchronous creations, > the instance is created in a state, but the state is not executed (at > least that's the way I understand it). You could create it synchronously and then send an event to it ;-). > I agree that multiple creation or deletion states are > needed because they do different things. I don't have a > specific example in mind, but it seems fairly plausible > that they would also have common things that would be > better documented as a common service and not duplicated. Plausible, yes. But I can't think of any examples either. If something is really needed, then it shouldn't be too difficult for its advocates to find one example... Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronous behavior) David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary D Hogan wrote: > Interesting. I looked this up in the OOA96 report (section 9.4) and > that wasn't clear to me. It just has three process bubbles shown, and > each only accepts an identifier as input. I also don't see > anything in the text that addresses supplemental data. Am I > looking in the wrong place? You're right. Supplemental data isn't mentioned. I just assumed that the only thing special about delayed events is that they're delayed. (The pathologically pedantic may wish to consider the semantics of delayed self-directed events ;-) ) > I can see how having supplemental data on a delayed event could > make sense. The tool we use doesn't yet support this style of > timer, so we haven't dealt with that yet. Do any of the SM tools > support OOA96 timers, and if so do they allow supplemental data? I don't know. > I'm not sure I understand what you mean by "if you want timer > functionality then you have to use standard wormholes/bridging". > Are you talking about bridging to/from the architecture domain > to provide the timer functionality? The interface to pre-OOA96 timers can be realised using OOA96 wormholes.
Thus you can still use the interface in OOA96. However, you will need to implement the timer in your own service domain because the architecture is no longer required to provide it. Once you have your own implementation, then you can tweak it to do whatever you want. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronous "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Bary D Hogan >I can see how having supplemental data on a delayed event could make >sense. The tool we use doesn't yet support this style of timer, so we >haven't delt with that yet. Do any of the SM tools support OOA96 >timers, and if so do they allow supplemental data? FWIW, our tool implements delayed events using the process 'generate after'. It is identical to a generate (any number of supplemental parameters) except that it also takes an additional inflow with a delay time. I have no idea what the commercial tools do. Responding to David.Whipp@infineon.com >(The pathologically pedantic may wish to consider the semantics >of delayed self-directed events ;-) ) FWIW, our architecture specifies that delayed events are never treated as "self-directed". Delayed events are placed in a separate ordered queue. As the delay time expires, they are moved to the end of the appropriate normal event queue. Self-directed events are placed at the front of a normal event queue thereby giving them priority over any pending events. I believe that the argument was something along the line of "Since other events can be processed before the delayed event occurs, giving self-directed priority to delayed events would yield the possibility that one such event could interfere with other self-directed events. If the delay time for a 'self-directed" delayed event expired between the generation of two other self-directed events for the same instance, what order should those events be dispatched?" Rather than deal with these types of issues, we made the blanket rule that delayed events are not self-directed and therefore not expedited. By the way, we also chose to make the architecture simpler by making the rule that multiple self-directed events generated in the same state action will be dispatched in the reverse of the order they are generated. (That way they can be placed on the front of the queue without the architecture having to hold them until the end of the state so it could push them in reverse order to make them come off the queue in the order they were generated.) Cop-out? Possibly. Has it ever been an issue? No. Have we ever had a reason to generate multiple self-directed events? To my knowledge, only once, and that was a somewhat contrived case which was done mostly because it could be done. Opinions? Does any of this constitute heresy? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering E. F. Johnson Company Waseca Operations Center 299 Johnson Ave SW Waseca, MN 56093-0514 dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... > Wow! You make action descriptions visible on your state model? 
You > must have a lot of simple state models (which is a good thing). Our tool provides an action language rather than ADFDs but the state model description is the high level text description of the action. The tool allows one to toggle between the two action views. Generally we consider it a warning flag when an object starts to have more than half a dozen states and we revisit the abstraction. A lot of states and transitions often indicate that the object has disparate responsibilities representing different abstractions. > I understand your concern, but I'm afraid that the cat is already out of > the bag so to speak. I could be wrong, but my perception is that many > S-M camps already use synchronous services in a variety of ways. My > concern is how to establish guidelines to avoid abusing them (which may > be difficult). True, most tool makers offer synchronous services. Typically there are only a few enforced restrictions on what can be done in the SS. If an action language is used rather than ADFDs, an SS invoked from a bridge has to be able to generate events and write attributes. So if one is going to enforce rules about what one can do in SSes one has to parse SSes differently depending upon the context of invocation, which is cumbersome to do. It is also tempting to place common processing shared among multiple objects' actions in SSes in the interest of reuse. Unless one is limited to navigation, for the SS to do anything useful it would have to write attributes and generate events. Again, it is tempting to open Pandora's Box. Personally, I don't see this as a big win, though, because it really isn't that common except for relationship navigation. For example, one might want to generate the same event in three different places occasionally, but then it would be unlikely for the context in each action to be the same. Besides, our actions tend to be small, so there isn't a lot to reuse at that level anyway. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > A state that unconditionally generates a self-directed > event will never remain in the state long enough to > receive an event from any other instance. Furthermore, > at the end of its action, the event to move it out > of the state is already at the front of self's queue. > So it seems reasonable to use the term "transient" to > describe this condition. True -- provided one has adopted the priority rule for self-directed events. However, until one adopts that rule to make the analyst's life easier for defining domain level flow of control, the state model looks exactly the same. There is nothing in the state model that makes the state or the transition special. But you don't agree with that because... > I think you need to follow this through. It is not > self-consistent. If another instance generates the event, > and you don't modify the state model, then the event will > be generated twice. Aha! Here it is at last! You believe the generation of the event is part of the state model. I do not. Where one generates an event it is an issue for domain level flow of control.
Where and when an event should be generated is determined by the raising of a circumstance in the state of the domain that is relevant to the flow of control. If the domain flow of control is changed, you move the event generation. > > I am not convinced this is a result of > > combining events. Consider a GUI domain in an air traffic > > control system where plane Icons that are a safe distance apart > > are green, those in danger of collision are red, and > > those that have just collided are black. An Icon object > > gets a Status message where the status value is 'collided', > > which transitions it to the Status Updated state (where it > > may already be). > > This all sounds (a) contrived and (b) polluted. Why do > Icons know about collisions of aircraft? Would you be happier if I said it receives a status value of 359? Icon doesn't have to know anything about the semantics of the value, only what to do with the value in -> Status Updated. Presumably whoever sent the event has some clearer notion of 'collided' to select that particular status, but even that isn't necessary. You seem to be proposing that whenever information has semantic context for the analyst but not for the abstraction, it should be neutered, so that event data packet values have names like 'a', 'b', and 'c' in a Basic program. Taking this to an extreme, since the receiving state model should have no idea of the source of an event causing a transition, the events themselves should not have meaningful names to reflect things like the circumstance that caused them to be issued because that is only known to the sender. > If you do manage > to demonstrate that it's all one domain, then you'll still > need to explain why the action that detected the collision > (by looking at the distance) can't send a "planes collided" > event instead of a "status" event. It's a good bet that > a "status" event indicates incomplete analysis. The explanation is that the same event is sent if planes move from a Normal distance to an Iffy distance, or vice versa. This is precisely because Icon doesn't understand the semantics. It does the same basic thing (change the display) in each context, so it only needs one event and one state to deal with all the sender's contexts. > > My counter is that whoever is sending the status information > > may not know the semantics of the status or its significance > > to Icon. In particular, to send three events rather than one > > would require the sending entity to understand the internals > > of the Icon state machine, which I would regard as a major > > No-No. To me it is far more natural for Icon to receive a > > status value and decide, itself, what to do about it. > > But the sender will understand the mission of the domain. I think you have a different view of the abstractions than I do. The sender probably knows about distances, etc. sufficiently to determine the status of the level of danger of the planes. (Though it probably doesn't know about 'danger' per se; it simply knows that certain distances imply certain status values.) Icon has no clue about this, but it does know that it does something cute with the display for each value of the status, whatever it is the status of. > In fact, the sender is an integral part of the domain's > mission (it's in the domain). If the domain needs to > distinguish collisions from near misses, then the sender > (which must decode these conditions if it is to send > "collided" or "near miss" data for a status event) can > send different events.
If the receiver wants to handle > two events in an identical way then it can have two > transitions between two states: one for collisions, the > other for near misses. This comes back to the notion of who determines the events required. I contend the receiver's transitions determine what events are to be generated and the analyst determines, based upon domain flow of control, where those events are generated. Note that the advantage of this view is that both sender and receiver state machines can be defined independently. The sender's state model is unaffected by inserting the event generator in a particular action. Similarly, the receiver does not depend upon where an event is generated. Under your scheme the sender state machine dictates the internals of the receiver's state machine (i.e., what states and transitions it has). [Admittedly, this is nice in theory but in practice the analyst is going to sneak the odd peek. My assertion is just that one is better off trying to keep things independent as much as possible because this encapsulates the implementation of the object's responsibilities (i.e., its state model).] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > > But the *content* of the SS would not be visible. My point was > > that if those > > SSes are doing things like generating events, those activities > > that would have > > been visible are now hidden at the SM/OCM level. > > All OCM-interesting things that are done in a service are shown on the OCM - > from that service - just as if it was a state action. eg: on the Sequence > Chart, if a service generated an event, that event is shown. I guess I am not making myself clear here, so let me try a different tack. When I look at an OCM I see events that pass between objects, but not events that are self directed (a point I would like to see fixed, but that is in the other thread). I do not see attribute writes. When I look at an SM I see the state actions (descriptions or action language) in the state boxes. This shows me where every event is generated and where every attribute write occurs, as well as where every event goes -- provided I have no SSes that write attributes or generate events. If I have no SSes that generate events or write to attributes, then I can understand the entire domain flow of control by looking at only those diagrams. However, if I have SSes that generate events or write attributes, I cannot see that in those diagrams. I have to look elsewhere to see the content of a specific SS. So if I have such SSes, I can no longer fully understand the domain flow of control by looking at the OCM and the SMs alone. In effect the SS is a state action whose description does not appear on the SM. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs.
Asynchronous behavior Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > I guess I am not making myself clear here, so let me try a different tack.... I must admit I'm a little confused. I wouldn't mind seeing summaries of the viewpoints of this thread. It's interesting, but too long for me to get my arms around. Falling through the cracks, Allen http://www.nova-eng.com Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronous Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > > "Dana Simonson" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > FWIW, our tool implements delayed events using the process 'generate > after'. It is identical to a generate (any number of supplemental > parameters) except that it also takes an additional inflow with a delay > time. I have no idea what the commercial tools do. > > Responding to David.Whipp@infineon.com > >(The pathologically pedantic may wish to consider the semantics > >of delayed self-directed events ;-) ) > > FWIW, our architecture specifies that delayed events are never > treated as "self-directed". Delayed events are placed in a separate > ordered queue. As the delay time expires, they are moved to the end of > the appropriate normal event queue. Self-directed events are placed at > the front of a normal event queue thereby giving them priority over any > pending events. I think this makes sense. Since it is delayed anyway, there is already the possibility of other events being received before the delayed event, so the state model must handle that anyway. > > I believe that the argument was something along the line of "Since > other events can be processed before the delayed event occurs, giving > self-directed priority to delayed events would yield the possibility > that one such event could interfere with other self-directed events. > If the delay time for a 'self-directed" delayed event expired between > the generation of two other self-directed events for the same instance, > what order should those events be dispatched?" Rather than deal with > these types of issues, we made the blanket rule that delayed events are > not self-directed and therefore not expedited. Good point! Aren't most delayed events generated from the instance that receives them? Using the OOA92 style timers, I haven't seen any cases in which one object sets a timer for a different object. However, with the concept of delayed events, it may make more sense to generate a delayed event to another object's instance. Has anyone done this? Are there any perceived drawbacks? > > By the way, we also chose to make the architecture simpler by making > the rule that multiple self-directed events generated in the same state > action will be dispatched in the reverse of the order they are > generated. (That way they can be placed on the front of the queue > without the architecture having to hold them until the end of the state > so it could push them in reverse order to make them come off the queue > in the order they were generated.) Cop-out? Possibly. Has it ever > been an issue? No. Have we ever had a reason to generate multiple > self-directed events? To my knowledge, only once, and that was a > somewhat contrived case which was done mostly because it could be > done. 
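To make that dispatching policy concrete, here is a rough sketch in Python-style pseudocode. It is purely illustrative -- the class and method names ('generate', 'generate_self', 'generate_after') are assumptions of this sketch, not any particular architecture or tool -- but it captures the three rules quoted above: ordinary events go to the back of the normal queue, self-directed events are expedited to the front, and delayed events wait in a separate time-ordered queue and rejoin the back of the normal queue when they expire.

    import heapq
    import itertools
    from collections import deque

    class EventQueues:
        def __init__(self):
            self.normal = deque()        # ordinary FIFO event queue
            self.delayed = []            # heap of (expiry, seq, event)
            self._seq = itertools.count()

        def generate(self, event):
            # ordinary event: back of the normal queue
            self.normal.append(event)

        def generate_self(self, event):
            # self-directed event: expedited to the front of the queue
            self.normal.appendleft(event)

        def generate_after(self, event, delay, now):
            # delayed event: held in a separate time-ordered queue
            heapq.heappush(self.delayed, (now + delay, next(self._seq), event))

        def next_event(self, now):
            # expired delayed events become ordinary events, never expedited
            while self.delayed and self.delayed[0][0] <= now:
                _, _, event = heapq.heappop(self.delayed)
                self.normal.append(event)
            return self.normal.popleft() if self.normal else None

Note that with this policy two self-directed events generated in the same action, E1 then E2, are dispatched E2 then E1 -- exactly the reverse-order rule described in the quote.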
I guess that if I were going to generate multiple self-directed events in a single state action, then I would expect them to come off in the same order as generated. However, I think the need would be so rare that it doesn't justify making the architecture more complicated. Thanks for the input! Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > In the simultaneous interpretation of time, you can't use > actions as an atomic unit. So you have to analyse the > synchronization even if you do synchronous creation or > deletion. > > How do you prevent simultaneous actions from messing things > up? You have to ensure that the analysis contains a > sequential constraint between all accesses to the data > and your modifications. Sequential constraint flows > along dataflows, events and transitions. SM is quite > weak when it comes to abstractions of synchronization. Now this is quite interesting -- we reverse our normal roles here. I would think that the view of time is an architectural choice. The logic of the solution in the OOA is always valid using the interleaved mode, so there should not be a need to modify it just to get the benefit of using the computing environment more efficiently. If the architecture supports the simultaneous view of time, then it should be up to the architecture to provide the guards on the attributes, navigations, and transitions to support that. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) is a self directed event special (was: Anonymous event David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > True -- provided one has adopted the priority rule for > self-directed events. However, until one adopts that > rule to make the analyst's life easier for defining > domain level flow of control, the state model looks > exactly the same. There is nothing in the state model > that makes the state or the transition special. What are you trying to say? Are you saying that, if the expediting rule didn't exist, then there would be nothing special about self-directed events? If so, I agree. If you are saying that once you adopt that rule then self-directed events become special, then I also agree. > Aha! Here it is at last! You believe the generation of > the event is part of the state model. I do not. But in another thread, you say: | When I look at an SM I see the state actions ... in the state | boxes. This shows me where every event is generated Actions are part of a state model; generators are part of actions. If you agree with these statements (and there is room for doubt); and if the "part-of" relationship is transitive (also questionable), then generators are part of the state model. I can accept the view that they are, or that they aren't. It isn't really very important. There does seem to be an association between them. I generally regard a change in an action to be a change to its enclosing state model. But I could accept an opposing view. > Where one generates an event it is an issue for domain > level flow of control.
Where and when an event should be > generated is determined by the raising of a circumstance in > the state of the domain that is relevant to the flow of > control. If the domain flow of control is changed, you > move the event generation. Completely true. So what? It's a tautology: "the model changes" <==> "the model changes"! > > This all sounds (a) contrived and (b) polluted. Why do > > Icons know about collisions of aircraft? > > Would you be happier if I said it receives a status value of > 359? No. It's still polluted. I cannot conceive of any real-world system for air traffic control in which the detection of near misses is part of a GUI-interface domain. There seems little point in continuing this subthread. When you build on a false premise, you can prove anything you want: "false => true" is true. > You seem to be proposing that whenever information has > semantic context for the analyst but not for the abstraction, > it should be neutered, so that event data packet values have > names like 'a', 'b', and 'c' in a Basic program. Taking this > to an extreme ... the events themselves should not have > meaningful names The lack of context in the abstraction proves that the two concepts belong in different domains. SM won't let you send an event to an object in another domain. If the event cannot exist, then it doesn't really matter how you name it, does it? > This comes back to the notion of who determines the events > required. I contend the receiver's transitions determine > what events are to be generated and the analyst determines, > based upon domain flow of control, where those events are > generated. Why should events depend on the internals of the receiver's state model? > Under your scheme the sender state machine dictates the > internals of the receiver's state machine (i.e., what states and > transitions it has). Absolutely not. You seem to have a very weird interpretation of my ideas. You should be able to produce (a first cut approximation of) the OCM without knowing the details of the objects' state models. The domain dictates the events, not the objects. Dave -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) Synchronous vs. Asynchronous behavior David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman writes to shlaer-mellor-users: > Now this is quite interesting -- we reverse our normal roles > here. I would think that the view of time is an architectural > choice. If it were an architectural choice then it would be irrelevant. The view of time will affect the model, and the architecture must know which view is assumed by the model. > The logic of the solution in the OOA is always valid > using the interleaved mode, so there should not be a need to > modify it just to get the benefit of using the computing > environment more efficiently. If the architecture supports > the simultaneous view of time, then it should be up to the > architecture to provide the guards on the attributes, > navigations, and transitions to support that. If a model assumes interleaved time, then the architecture must provide the locks, etc., when it chooses to use a simultaneous implementation. If the model assumes simultaneous time, then the architecture does not need to provide the locks.
In fact, it is arguable that it mustn't introduce locks, because to do so can introduce deadlock scenarios that the analyst cannot see. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronousbehavior) lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > FWIW, our architecture specifies that delayed events are never treated as "self-directed". Delayed events are placed in a separate ordered queue. As the delay time expires, they are moved to the end of the appropriate normal event queue. Self-directed events are placed at the front of a normal event queue thereby giving them priority over any pending events. > > I believe that the argument was something along the line of "Since other events can be processed before the delayed event occurs, giving self-directed priority to delayed events would yield the possibility that one such event could interfere with other self-directed events. If the delay time for a 'self-directed" delayed event expired between the generation of two other self-directed events for the same instance, what order should those events be dispatched?" Rather than deal with these types of issues, we made the blanket rule that delayed events are not self-directed and therefore not expedited. I agree but would argue it differently. I would not think of the delayed events as existing in the OOA until the delay time expires -- they only exist in the limbo of your separate ordered queue. They come into existence in the OOA context when they are placed on the normal event queue. At that point in time they become 'normal' events and should obey those rules. > By the way, we also chose to make the architecture simpler by making the rule that multiple self-directed events generated in the same state action will be dispatched in the reverse of the order they are generated. (That way they can be placed on the front of the queue without the architecture having to hold them until the end of the state so it could push them in reverse order to make them come off the queue in the order they were generated.) Cop-out? Possibly. Has it ever been an issue? No. Have we ever had a reason to generate multiple self-directed events? To my knowledge, only once, and that was a somewhat contrived case which was done mostly because it could be done. > > Opinions? Does any of this constitute heresy? I don't see a problem here in the interleaved view of time. The key issue is that in the end there is no way for the architecture to behave differently than the OOA assumption because nobody else can be placing events on the queue until the current action completes. By extension the same argument applies for self-directed events in the simultaneous view of time. That is, the instance cannot be plucking events off its queue while one of its actions is active. Therefore the result will be the same for self-directed events because their queue behaves the same way (relative to their instance) as an interleaved queue. As far as heresy is concerned, I don't see a major philosophical problem here. B-) Even if it doesn't work in some pathological case, that is a translation error rather than a methodology error. If the architecture doesn't fully satisfy the rules and constraints of the OOA, it is simply broken. 
That is, the issue is just whether the architecture implemented the OOA rules correctly. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > Not long ago we had a discussion similar to this supertype/subtype one. I > don't know if we reached consensus or not since it seemed that the arguments > to what I said actually agreed with what I said. :-) I guess I must have > expressed myself badly, so I'll try again. (Glutton for punishment. ;-) ) > > You cannot delete the subtype instance without deleting the supertype > instance, because they are both THE instance. You cannot create a subtype > instance without creating a supertype instance, and you cannot delete a > subtype instance without deleting the supertype instance. This is perfectly > natural, because the subtype/supertype instance is an instance of the same > object. I think taking a view of them being seperate instances would make it > hard to accept that you could have instances of different subtypes of the same > supertype in existance at the same time. I am still bothered by the notion of having a supertype instance to delete. This is a problem because some CASE vendors have chosen to instantiate supertypes in their architectures as a convenience, particularly when dealing with non-OOPLs. So to get a handle on this discussion, we need to separate the CASE solutions from the OOA concepts. This implementation of CASE supertypes has bled back into the OOA with the need to deal with explicitly linking the is-a relationship in an action language. [Typically the CASE tools allow one to simply migrate the subtype without messing with the supertype, other than to relink the is-a relationship.] I do not like that because it encourages the notion that there really is a separate instance for the supertype -- which leads to the sort of confusion represented in this thread over subtype migration. It also allows incorrect OOAs if one is not religious about handling the supertype in the same action as the subtype. Regardless of what the CASE tools do, conceptually there is only one instance in the OOA -- the leaf subtype instance that is migrating. Note, in particular, that in an OOA there is never any need to deal with the is-a relationship because it is implicit in the fact that the subtype and supertype identifiers are identical at the instance level, regardless of the subtype. Those identifiers completely define the is-a relationship, just like referential attributes in non-is-a relationships. Similarly, Normal Form dictates that they are the same thing; you can't have two entity instances with the same identifiers. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous (Lahman: OLC is 10 times lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lechner... Ah, an opportunity to wax philosophical... 
> In the old days, structured analysis produced a hierarchy of DataFlow Diags (DFDs) > to show the lateral flow at higher levels analogous to a hierarchy of State Models. > Analysts didn't want to drill down into the code, but just spec the required > info and behavior models and let programmers complete the implementation. > > Problem was: DFDs were not maintained thru changes, so they became obsolete and > were not trusted by maintenance. State models show control flow, not data flow, > but the flow is lateral thru event communications like data-flow diagrams. This was certainly a problem, but it is a problem with any bubble & arrows approach as well. S-M just forces the models to be maintained in a code generation environment. However, I think the maintenance problem lay in what you had to do, even if the DFDs were up to date. One situation is analogous to trying to make general purpose objects for reuse. The interfaces are the weakness -- it is simply too difficult to develop interfaces that can serve all clients once one gets out of the basic computer science data structures. Similarly, all those layers are effectively interfaces. So when one wants a change in behavior at the higher levels it becomes difficult to modify all those interfaces to get it to happen correctly. More importantly, though, to my argument is the problem that arises when one has to make elements of those low level layers talk to one another. Since their only common point of interface is some higher layer one must invoke, at the higher layer, one limb to get the data to pass down the other limb. Unfortunately that may involve invoking some inconvenient functionality in the first limb just to get the data for the second limb to chew upon. Fixing that problem often results in tearing up several acres of DFDs and re-decomposing all the layers in the two limbs. > Visibility is now turned inside out by info-hiding. Then, info structures > were exposed but process implementations were hidden inside of nodes. > Now we see state actions that decompose transaction processes into fragments, > but the info is less visible - (hidden inside action routines which access > data members). True, but the DFDs for those actions are only one layer deep. This is really crucial to understanding the scope of changes. For example, when I debug problems at least 75% of the time I can identify the action causing the problem before I ever go near the debugger -- just be inspecting at the state models. This is because I can see _all_ of the side effects in the events and abstract action descriptions (or adding yet another limb that is almost but not quite the same). > Now we have less of a need to know HOW the data is transformed, > but there is more need to produce declarative specs of interfaces > before implementation and more tools such as code generation to support them. > Lateral control flow models simplify the definition of pre- and post-conditions > which are more complete and more adequately express the behavior > we want built or revised and tested. True. But I think this is a natural outgrowth of the translationist philosophy. Since an S-M OOA is independent of implementation, it is not surprising that it is much closer to requirements analysis than programming. So people like Leslie Munday bend it a bit to do pure requirements specification via pre-/post-conditions without defining the solution to the problem. It is also the reason why many of the traditional worries of OT are not so relevant for S-M. 
Since S-M _assumes_ you are going to do code generation, the role of OO constructs for cohesion and decoupling at the OOPL level is vastly reduced. One no longer cares about such things the same way a C++ programmer does not care about the rat's nest of branches in the machine code produced by the compiler. So S-M addresses decoupling at the component level with bridges and wormholes. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) is a self directed event special (was: Anonymous event lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > True -- provided one has adopted the priority rule for > > self-directed events. However, until one adopts that > > rule to make the analyst's life easier for defining > > domain level flow of control, the state model looks > > exactly the same. There is nothing in the state model > > that makes the state or the transition special. > > What are you trying to say? > > Are you saying that, if the expediting rule didn't exist, > then there would be nothing special about self-directed > events? If so, I agree. Good. > If you are saying that once you adopt that rule then > self-directed events become special, then I also agree. Not so good. My point is that the rule for prioritization is to make things work for the _domain's_ flow of control -- the priority is over events from other objects. If the state wasn't special before the rule, it isn't special with the rule because the rule is not directly relevant to the state model itself. > > Aha! Here it is at last! You believe the generation of > > the event is part of the state model. I do not. > > But in another thread, you say: > > | When I look at an SM I see the state actions ... in the state > | boxes. This shows me where every event is generated > > Actions are part of a state model; generators are part of > actions. If you agree with these statements (and there is > room for doubt); and if the "part-of" relationship is > transitive (also questionable), then generators are part > of the state model. > > I can accept the view that they are, or that they aren't. > It isn't really very important. There does seem to be an > association between them. I generally regard a change in > an action to be a change to its enclosing state model. > But I could accept an opposing view. I also said previously that I try to place the event generation after the state models have been defined and I try to base that on what the domain needs to have happen. When I look at the state models to see where events are generated, I am looking at the domain's flow of control -- hence the comment you quoted. If one accepts the view that event generation is a domain flow of control issue rather than a state model flow of control issue, then I think it is important. The implication is that self-directedness is a domain issue, not a state model issue. So there is nothing special about the state generating the event. > > > This all sounds (a) contrived and (b) polluted. Why do > > > Icons know about collisions of aircraft? > > > > Would you be happier if I said it receives a status value of > > 359? > > No. It's still polluted. I cannot conceive of any real-world > system for air traffic control in which the detection of > near misses is part of a GUI-interface domain.
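Since the Icon example keeps coming up, a small sketch may make the two positions concrete. This is plain Python used as pseudocode, not S-M action language; the colour mapping and class names are assumptions of the sketch. In the first style the receiver interprets a single parameterised Status event; in the second the sender chooses among per-condition events, which is the coupling being debated.

    STATUS_COLOUR = {"safe": "green", "danger": "red", "collided": "black"}

    class IconSingleEvent:
        # One 'Status' event carrying a value: the receiver decides what
        # to do; the sender only forwards whatever status it was given.
        def status_updated(self, status):
            self.colour = STATUS_COLOUR[status]

    class IconPerConditionEvents:
        # One event per condition: the sender must know which of the
        # receiver's transitions it is driving.
        def planes_safe(self):
            self.colour = "green"

        def planes_in_danger(self):
            self.colour = "red"

        def planes_collided(self):
            self.colour = "black"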
Where did it say that the event was generated in the domain? As I recall I referred to 'some other object' in the context of contrasting the sender's and receiver's view of the information. However, I think this is a red herring because the purpose of the example was to make it clear that the semantics of the event could be quite different between between sender and receiver. That could happen internally in any domain. > > This comes back to the notion of who determines the events > > required. I contend the receiver's transitions determine > > what events are to be generated and the analyst determines, > > based upon domain flow of control, where those events are > > generated. > > Why should events depend on the internals of the receiver's > state model? Let me try this another way. The states and the transitions define the state model and each transition has an event associated with it. My argument throughout has been that the state model should be defined as much as possible by the intrinsic behavior of the object. If one does that, then after the model is completed one has to define where in the domain those events should be generated. Which events are to be generated depends, then, exclusively on the receiver's state model transitions. But the issue of Where and When those events are generated becomes a domain flow of control issue. The analyst identifies the proper place to generate the event and places it in that state model action. But this comes full circle to my original objection that the state model action should not be doing things to determine who the receiver is. Since this is a domain flow of control issue, that navigation should be attached to a domain level artifact like the OCM rather than being placed in the sender state model. Now if I don't want the sender to know about where the event is going and I only placed the event in the sender to accommodate domain flow of control, then I certainly don't want the sender to be determining Which events to send based upon its local semantics. As soon as the sender starts doing that we have... > > Under your scheme the sender state machine dictates the > > internals of the receiver's state machine (i.e., what states and > > transitions it has). > > Abolutely not. You seem to have a very wierd interpretation > of my ideas. You should be able to produce (a first cut > approximation of) the OCM without knowing the details of > the objects' state models. The domain dictates the events, > not the objects. OK, here's how I saw your position. You suggested that the single Status event should be replaced by multiple events for the various status situations (i.e., replacing the generation of self-directed events). This is based upon the sender's semantic understanding of what is important. This would require reformulating the receiving state machine to handle multiple external events rather than a single event. At a minimum the event data packet and its processing in the action changes. In more complex cases whole states and transitions might be added/deleted. But since only the sender may understand that semantic (the more profound implications of the status), then the sender is dictating the structure of the receiver. We got here talking about my examples of situations where self-directed events might be necessary. You assert that in the case leading to this example they aren't necessary. 
My counter is that they aren't necessary only if one drives the event generation from the sender or chooses to design both state machines simultaneously so that both structures are dictated directly by their communication rather than the intrinsic behavior of each. I see value in designing the FSMs independently, so I see this as a justification for using self-directed events in some situations. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > If a model assumes interleaved time, then the architecture > must provide the locks, etc., when it chooses to use a > simultaneous implementation. > > If the model assumes simultaneous time, then the architecture > does not need to provide the locks. In fact, it is arguable > that it mustn't introduce locks, because to do so can > introduce deadlock scenarios that the analyst cannot see. So the question is: why not always assume interleaved if it can be translated to simultaneous? It would certainly be easier on the analyst. B-) When I look at section 5.6 of OL:MWS, that introduction reads to me like it is talking about how the OOA notation supports two views of concurrency _in the computing environment_. That view seems contradicted in the following subsections, though. So I am unsure about whether that decision needs to be made in the OOA. I don't think the 'rules for actions' affect the way that the analyst solves the problem under either interpretation per se -- they merely define the context (e.g., I could easily interpret 3a and 3b as advisories about what the architecture has to worry about). Similarly, the rules for events don't seem to be affected by either interpretation if one is assuming asynchronous events. In contrast, the subsection on 'rules for data consistency' identifies some potential problems. One can also dream up problems with relationship navigation or duplicate instances if actions are simultaneously creating, deleting, or migrating instances. But resolving these problems comes down to a rather mechanical process of action blocking (or data locking, relationship locking, instance locking, whatever). This doesn't seem appropriate to an OOA since the need for it depends in large part on the nature of the S-M notation rather than the problem space. It seems to me that this can be done rather generically (though not necessarily easily) in the architecture after keeping track of what is being touched in potentially dangerous manner. But trying to do the same thing in the OOA would probably be pretty ugly (i.e., extra states, transitions, and flag attributes that were ubiquitous among the state models). It also strikes me as an invitation to implementation pollution ("I think I need a Semaphore object to..."). So I don't see a compelling reason for making the time view decision in the OOA. OTOH, we haven't encountered a situation where it was necessary in either the OOA or the architecture, so this view may be just a tad naive. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. 
Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- Sorry about the slow response.... had to do some real work... > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Bary D Hogan writes to shlaer-mellor-users: > > In general, it seems that the best way to ensure this is to accomplish > > the deletion and creation within the same action. If the supertype > > accepts polymorphic events which can arrive at any time, then > > I need to > > be in one subtype or the other when an event is accepted. Somehow it > > seems wrong to me to allow the supertype instance to exist with no > > subtype instances (or multiple subtype instances) for some period of > > time. I can protect it within the context of a state action. > > How do I protect it outside of that context? > > In the simultaneous interpretation of time, you can't use > actions as an atomic unit. So you have to analyse the > synchronization even if you do synchronous creation or > deletion. > > How do you prevent simultaneous actions from messing things > up? You have to ensure that the analysis contains a > sequential constraint between all accesses to the data > and your modifications. Sequential constraint flows > along dataflows, events and transitions. SM is quite > weak when it comes to abstractions of synchronization. > First, I want to address protection of the instance state machine: Considering rules about state actions and events, some are unchanged by the view of time. Only one action of a state machine can be in execution at any point in time; an action of a state machine must complete before another event can be accepted by that state machine; when an instance completes an action it is now in the new state; etc. In a role migration situation, deleting one subtype and generating an event to create the next subtype creates some difficulties enforcing these concepts. When you delete one subtype instance and generate the create event, then the overall instance (real world instance) is not in _ANY_ state. I think this violates the rules about actions and events, specifically the last one I mentioned. This also opens up the possibility of accepting one or more polymorphic events by the supertype instance while no corresponding subtype instance exists. Stated another way, you should be able to think about the overall instance as having one state machine which is defined by a combination of the state models of all of the subtypes. In this view, the delete state of one subtype and the create state of the next subtype are effectively combined into one state. We need a mechanism to represent this in the analysis. This is my primary argument for creating the next subtype synchronously. I extend that argument to say that a synchronous service should be used when there is anything significant going on with the create such that doing all of the create action from the deleting subtype would cause me to violate the one fact in one place principle. As for consistent data: The problem of data consistency when using the simultaneous view of time exists no matter how you accomplish the subtype migration (synchronous or asynchronous), so I don't think it is a determining factor when deciding how to do subtype migration.
The method does clearly state that it is the responsibility of the analyst to ensure that data required by an action is consistent. I tend to agree with Lahman in that doing this in the OOA could get pretty ugly in the simultaneous view of time. > > > Yes, putting it in the creation state is a perfect place to put it, > > unless I'm creating it synchronously. :-) With synchronous creations, > > the instance is created in a state, but the state is not executed (at > > least that's the way I understand it). > > You could create it synchronously then send an event to it ;-). > You could, but that might bring up other issues. If the event you generate is not considered self-directed, then you have to deal with the possibility of receiving other polymorphic events prior to this event. That could be hard to deal with since you haven't finished all the creation actions. > > > I agree that multiple creation or deletion states are > > needed because they do different things. I don't have a > > specific example in mind, but it seems fairly plausible > > that they would also have common things that would be > > better documented as a common service and not duplicated. > > Plausible, yes. But I can't think of any examples either. > If something it really needed, then it shouldn't be too > difficult for its advocate's to find one example... If it's plausible, then I think it is valid to talk about what to do if and when that situation comes up. I have several real examples, none of which I can broadcast on this group. I only brought this up to point out that subtype migration is not the only case in which common creation actions might be better documented as synchronous services. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronousbehavior) Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- I have to disagree. This is not a violation of the self-directed event rule. However, it is a violation of the single sender-receiver pair rule: events between a particular sender instance and particular receiver instance must be received in the same order as they were sent. In this case the sender and receiver are the same instance. > By the way, we also chose to make the architecture simpler by making the rule that multiple self-directed events generated in the same state action will be dispatched in the reverse of the order they are generated. (That way they can be placed on the front of the queue without the architecture having to hold them until the end of the state so it could push them in reverse order to make them come off the queue in the order they were generated.) Cop-out? Possibly. Has it ever been an issue? No. Have we ever had a reason to generate multiple self-directed events? To my knowledge, only once, and that was a somewhat contrived case which was done mostly because it could be done. > > Opinions? Does any of this constitute heresy? I don't see a problem here in the interleaved view of time. The key issue is that in the end there is no way for the architecture to behave differently than the OOA assumption because nobody else can be placing events on the queue until the current action completes. By extension the same argument applies for self-directed events in the simultaneous view of time. That is, the instance cannot be plucking events off its queue while one of its actions is active. 
Therefore the result will be the same for self-directed events because their queue behaves the same way (relative to their instance) as an interleaved queue. As far as heresy is concerned, I don't see a major philosophical problem here. B-) Even if it doesn't work in some pathological case, that is a translation error rather than a methodology error. If the architecture doesn't fully satisfy the rules and constraints of the OOA, it is simply broken. That is, the issue is just whether the architecture implemented the OOA rules correctly. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Subtype vs. Supertype instances Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Incorrect on two counts. 1) There really is a separate instance for the supertype in OOA. It is a distinct entity on the OIM and participates in a relationship with its subtypes. It has identifiers. It can carry data attributes. It can even have a state model, or pieces of one. 2) There is no violation of normal form. Two objects can have the same identifiers--check almost any 1:1 relationship on almost any OIM for an example. There is no OOA normal form rule which prohibits this situation. lahman on 12/12/99 11:23:20 AM Please respond to shlaer-mellor-users@projtech.com To: shlaer-mellor-users@projtech.com cc: (bcc: Erick Hagstrom/Nashville/Envoy) Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > Not long ago we had a discussion similar to this supertype/subtype one. I > don't know if we reached consensus or not since it seemed that the arguments > to what I said actually agreed with what I said. :-) I guess I must have > expressed myself badly, so I'll try again. (Glutton for punishment. ;-) ) > > You cannot delete the subtype instance without deleting the supertype > instance, because they are both THE instance. You cannot create a subtype > instance without creating a supertype instance, and you cannot delete a > subtype instance without deleting the supertype instance. This is perfectly > natural, because the subtype/supertype instance is an instance of the same > object. I think taking a view of them being seperate instances would make it > hard to accept that you could have instances of different subtypes of the same > supertype in existance at the same time. I am still bothered by the notion of having a supertype instance to delete. This is a problem because some CASE vendors have chosen to instantiate supertypes in their architectures as a convenience, particularly when dealing with non-OOPLs. So to get a handle on this discussion, we need to separate the CASE solutions from the OOA concepts. This implementation of CASE supertypes has bled back into the OOA with the need to deal with explicitly linking the is-a relationship in an action language. [Typically the CASE tools allow one to simply migrate the subtype without messing with the supertype, other than to relink the is-a relationship.] I do not like that because it encourages the notion that there really is a separate instance for the supertype -- which leads to the sort of confusion represented in this thread over subtype migration. 
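[For contrast, a sketch with hypothetical names (not any particular CASE vendor's generated code) of the separate-supertype-instance arrangement just described: supertype and subtype are distinct records joined by an is-a link that the analyst must relink by hand, which is exactly the bookkeeping that invites inconsistency.]

#include <stddef.h>

typedef struct PuppyDog PuppyDog;
typedef struct AdultDog AdultDog;

typedef struct {
    int       dog_id;
    PuppyDog *puppy;   /* exactly one of these two should be non-NULL */
    AdultDog *adult;
} DogSuper;

struct PuppyDog { DogSuper *super; /* puppy-specific attributes ... */ };
struct AdultDog { DogSuper *super; /* adult-specific attributes ... */ };

/* The analyst-visible "relink the is-a" step.  If a migration action forgets
 * to call this, the supertype record still points at a deleted subtype -- the
 * kind of incorrect model that a single-instance view makes impossible. */
void relink_is_a_to_adult(DogSuper *s, AdultDog *a)
{
    s->puppy = NULL;
    s->adult = a;
    a->super = s;
}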
It also allows incorrect OOAs if one is not religious about handling the supertype in the same action as the subtype. Regardless of what the CASE tools do, conceptually there is only one instance in the OOA -- the leaf subtype instance that is migrating. Note, in particular, that in an OOA there is never any need to deal with the is-a relationship because it is implicit in the fact that the subtype and supertype identifiers are identical at the instance level, regardless of the subtype. Those identifiers completely define the is-a relationship, just like referential attributes in non-is-a relationships. Similarly, Normal Form dictates that they are the same thing; you can't have two entity instances with the same identifiers. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... > In a role migration situation, deleting one subtype and generating an > event to create the next subtype creates some difficulties enforcing > these concepts. When you delete one subtype instance and generate the > create event, then the overall instance (real world instance) is not in > _ANY_ state. I think this violates the rules about actions and events, > specifically the last one I mentioned. This also opens up the > possibility of accepting one or more polymorphic events by the > supertype instance while no corresponding subtype instance exists. I am not sure there is a technical violation. The create event is not directed to an instance but to the object, so the rules about events and instances do not apply. The methodology also cops out by saying that consistency does not necessarily have to be enforced at the action boundaries. However, if the domain is not consistent at the action boundary, then the analyst is obligated to ensure that no harm will arise by processing whatever events are available until the system is consistent again. The polymorphic event clearly presents a problem for doing this. Therefore the analyst would have to effectively assure that no polymorphic event could be pending when the subtype migration takes place -- a rather daunting task in the general case. [One thing that would help would be to give priority to create events over normal events to any of the object's instances or, perhaps, absolute priority to create events in the domain. I would want to think a bit about that one, though, because it could impose performance problems for the architecture in the simultaneous view of time.] While there may not be a technical violation, I certainly agree this is the practical reason for trying to use synchronous creates wherever possible. > Stated another way, you should be able think about the overall instance > as having one state machine which is defined by a combination of the > state models of all of the subtypes. In this view, the delete state of > one subtype and the create state of the next subtype are effectively > combined into one state. We need a mechanism to represent this in the > analysis. I don't think this solves the problem of the pending events, which is the sticking point. If they get executed first, they cause a transition from the 'delete/create' state to...? 
If it is in the new model, then we have a paradox because the create event's action has not been executed yet and the migrated model's states shouldn't be there yet. OTOH, if the analyst assures there can be no pending events when the create event is generated, then there is no need to think in these terms -- everything will Just Work without worrying about it. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Erick.Hagstrom@envoy.com >I have to disagree. This is not a violation of the self-directed event rule. >However, it is a violation of the single sender-receiver pair rule: events >between a particular sender instance and particular receiver instance must be >received in the same order as they were sent. In this case the sender and >receiver are the same instance. Is it? The self-directed event rule from OOA96 states, "If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance." If I generate two self-directed events in the same state, does the first one generated fall under this rule? If it does, then the second one generated MUST be accepted before the first. The reason I brought this up in the first place is that I believe you can argue both sides. Either 1) self directed events take precedence over all pending events including other pending self-directed events, or 2) multiple self-directed events take precedence over 'regular' events, but must obey the sender-receiver pair rule amongst themselves. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering E. F. Johnson Company Waseca Operations Center 299 Johnson Ave SW Waseca, MN 56093-0514 dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hagstrom... > Incorrect on two counts. 1) There really is a separate instance for the > supertype in OOA. It is a distinct entity on the OIM and participates in a > relationship with its subtypes. It has identifiers. It can carry data > attributes. It can even have a state model, or pieces of one. One has to be careful to distinguish between the notation and the instantiation. For example, in S-M the supertype state model is simply a notational convenience to eliminate the redundant specification of common states and transitions in the leaf objects; the intent was definitely not for independent instantiation (clarified by Sally Shlaer on this forum a few years back). In an Entity Relationship Diagram, the forerunner of the OIM, the supertype was used as a convenient mechanism for describing data hierarchies in RDBs. However, in the RDB instantiation the tables were the union of attributes from the subtypes and supertypes. The is-a relationship in the ERD was used instead for optimizing index structures for more efficient access of data unique to the subtypes. > 2) There is no > violation of normal form. Two objects can have the same identifiers--check > almost any 1:1 relationship on almost any OIM for an example. 
There is no OOA > normal form rule which prohibits this situation. I am afraid I do not understand your point about the 1:1 relationship. The objects that are related in that situation always have different keys (unless they are reflexive). My point was about keys and the attributes uniquely defined by them. NF1 defines a key as A -> B such that if B is a set of attributes, then B is uniquely determined by A. If we have a second set of attributes, C, such that A -> C, then B and C are transitively dependent on A per NF3 so we have A -> (B U C). Thus if an object is instantiated with a particular key value, the union of all of the attributes that are uniquely determined by that key value must be collected with it in the instantiation. BTW, that is not to say that a particular implementation can't instantiate the supertype separately in the architecture. However, if that is done, then special care must be taken to ensure that the translated code behaves as if there was a single instantiation in all possible circumstances. This is not easy to do, particularly in create/delete situations. So the CASE tools that do this tend to palm the problem back off on the analyst by making the analyst enforce relationship consistency in the OOA through explicitly linking/unlinking all the relationships, including the is-a. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Hogan... > > > In a role migration situation, deleting one subtype and generating an > > event to create the next subtype creates some difficulties enforcing > > these concepts. When you delete one subtype instance and generate the > > create event, then the overall instance (real world instance) is not in > > _ANY_ state. I think this violates the rules about actions and events, > > specifically the last one I mentioned. This also opens up the > > possibility of accepting one or more polymorphic events by the > > supertype instance while no corresponding subtype instance exists. > > I am not sure there is a technical violation. The create event is not > directed to an instance but to the object, so the rules about events and > instances do not apply. The methodology also cops out by saying that > consistency does not necessarily have to be enforced at the action > boundaries. However, if the domain is not consistent at the action boundary, > then the analyst is obligated to ensure that no harm will arise by processing > whatever events are available until the system is consistent again. Maybe I should have said it violates the spirit of the rules. I can agree with you that it doesn't violate the rules technically. I'm thinking of the real world instance (supertype and subtype instance combination in the OOA) going through its lifecycle. I think that the analysis rules should force this real world instance to always be in a state until it is really deleted. > > The polymorphic event clearly presents a problem for doing this. 
Therefore > the analyst would have to effectively assure that no polymorphic event could > be pending when the subtype migration takes place -- a rather daunting task in > the general case. [One thing that would help would be to give priority to > create events over normal events to any of the object's instances or, perhaps, > absolute priority to create events in the domain. I would want to think a bit > about that one, though, because it could impose performance problems for the > architecture in the simultaneous view of time.] Agreed. In the general case, I'm not sure it is even possible to ensure that there is no polymorphic event pending when the migration happens. The event that caused the migration could have been polymorphic and there could already be other polymorphic events pending. > > While there may not be a technical violation, I certainly agree this is the > practical reason for trying to use synchronous creates wherever possible. > > > Stated another way, you should be able think about the overall instance > > as having one state machine which is defined by a combination of the > > state models of all of the subtypes. In this view, the delete state of > > one subtype and the create state of the next subtype are effectively > > combined into one state. We need a mechanism to represent this in the > > analysis. > > I don't think this solves the problem of the pending events, which is the > sticking point. If they get executed first, they cause a transition from the > 'delete/create' state to...? If it is in the new model, then we have a > paradox because the create event's action has not been executed yet and the > migrated model's states shouldn't be there yet. OTOH, if the analyst assures > there can be no pending events when the create event is generated, then there > is no need to think in these terms -- everything will Just Work without > worrying about it. I was only trying to point out that instead of using migrating subtypes, I should be able to model the state model in the supertype (one huge state model), and this should be an equivalent solution (more or less). There could still be states in which you delete one subtype and create the next one, but the instance would never be in limbo as far as having a current state when an event is accepted. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: RE: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronous Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > I not sure I understand what you mean by "if you want timer > > functionality then you have to use standard wormholes/bridging". > > Are you talking about bridging to/from the architecture domain > > to provide the timer functionality? > > The interface to pre-OOA96 timers can be realised using OOA96 > wormholes. Thus you can still use the interface in OOA96. > However, you will need to implement the timer in your own > service domain because the architecture is no longer > required to provide it. Once you have your own implementation, > then you can tweak it to do whatever you want. > Dave, I'm still not sure I understand this. Are you saying that providing the mechanisms for delayed events is _not_ the responsibility of the architecture domain. If not, then why? 
I thought that it would be rather natural to put this in the architecture. Thanks, Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620 Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Interesting point. The view that I have been taking, essentially a FIFO view of self-directed events, leads to a violation of the self-directed event rule as written. But the other view, a LIFO view of self-directed events, leads to a violation of the single sender/receiver pair rule. There are three ways out: 1) Allow the ambiguity in the method, let the analyst pick, and make sure you use an architecture that supports the adopted view, similar to the existing latitude allowed in the view of time. 2) Prohibit multiple self-directed events generated in a single execution of a single state action. 3) Reformulate the rules to remove the ambiguity. Anyone have a preference? In other words, a) does anybody really need to do this, and b) are both the LIFO and the FIFO approaches necessary under some circumstances? "Dana Simonson" on 12/15/99 08:13:19 AM Please respond to shlaer-mellor-users@projtech.com To: shlaer-mellor-users@projtech.com cc: (bcc: Erick Hagstrom/Nashville/Envoy) Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronousbehavior) "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Erick.Hagstrom@envoy.com >I have to disagree. This is not a violation of the self-directed event rule. >However, it is a violation of the single sender-receiver pair rule: events >between a particular sender instance and particular receiver instance must be >received in the same order as they were sent. In this case the sender and >receiver are the same instance. Is it? The self-directed event rule from OOA96 states, "If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance." If I generate two self-directed events in the same state, does the first one generated fall under this rule? If it does, then the second one generated MUST be accepted before the first. The reason I brought this up in the first place is that I believe you can argue both sides. Either 1) self directed events take precedence over all pending events including other pending self-directed events, or 2) multiple self-directed events take precedence over 'regular' events, but must obey the sender-receiver pair rule amongst themselves. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering E. F. Johnson Company Waseca Operations Center 299 Johnson Ave SW Waseca, MN 56093-0514 dsimonson@efjohnson.com www.efjohnson.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lschneid@eng.delcoelect.com (Lee Riemenschneider) writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary D Hogan writes to shlaer-mellor-users: -------------------------------------------------------------------- >> lahman writes to shlaer-mellor-users: >> -------------------------------------------------------------------- >> >> Responding to Hogan... >> >> > In a role migration situation, deleting one subtype and generating an >> > event to create the next subtype creates some difficulties enforcing >> > these concepts. 
When you delete one subtype instance and generate the >> > create event, then the overall instance (real world instance) is not in >> > _ANY_ state. I think this violates the rules about actions and events, >> > specifically the last one I mentioned. This also opens up the >> > possibility of accepting one or more polymorphic events by the >> > supertype instance while no corresponding subtype instance exists. >> >> I am not sure there is a technical violation. The create event is not >> directed to an instance but to the object, so the rules about events and >> instances do not apply. The methodology also cops out by saying that >> consistency does not necessarily have to be enforced at the action >> boundaries. However, if the domain is not consistent at the action boundary, >> then the analyst is obligated to ensure that no harm will arise by processing >> whatever events are available until the system is consistent again. > >Maybe I should have said it violates the spirit of the rules. I can >agree with you that it doesn't violate the rules technically. I'm >thinking of the real world instance (supertype and subtype instance >combination in the OOA) going through its lifecycle. I think that the >analysis rules should force this real world instance to always be in a >state until it is really deleted. > OL:MWS seems to indicate that in subtype migration cases, the lifecycle EITHER exists in the supertype OR in the subtypes. To me, this either/or seems to represent the real world case of a migratory subtype. Can there be cases where migratory subtypes share a common portion of a lifecycle in the supertype? "Splicing" is covered for the non-migrating subtypes in OL:MWS. Not allowing splicing in the migratory sub/supertype lifecycles would prevent the aforementioned "polymorphic events" in the absence of a subtype from happening. Allowing splicing in migratory cases would be to say that you have events directed at an instance that are not aware of its present "condition". Didn't we have a discussion on this already? ;-) =========================================================================== Lee W. Riemenschneider lee.w.riemenschneider@delphiauto.com Software Engineer OS/2 user PURDUE alumnus and die-hard FAN! =========================================================================== Subject: RE: (SMU) OOA 96 Timers (was Synchronous vs. David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Erick.Hagstrom@envoy.com wrote > 1) Allow the ambiguity in the method, let the analyst pick, > and make sure you use an architecture that supports the > adopted view, similar to the existing > latitude allowed in the view of time. > 2) Prohibit multiple self-directed events generated in a > single execution of a single state action. > 3) Reformulate the rules to remove the ambiguity. > > Anyone have a preference? As you may have guessed from a previous thread, my point of view is that one rule applies to events directed to an identified instance, and the other applies to events directed at self. We don't need to do any of the above because the current rules are correctly formulated. Consider the OOA-of-OOA: you could have a supertype named "Event" with subtypes "Instance-to-Instance Event" and "Self-Directed Event" (better names probably could be found). Then, one rule defines the behaviour of one subtype, the other defines the behaviour of the other. Two subtypes, two behaviors: no problem! Dave.
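[A sketch of how an architecture might realise this two-subtype reading. Names and the fixed queue size are illustrative only, overflow handling is omitted, and the FIFO ordering chosen among the self-directed events themselves is only one of the two interpretations being debated in this thread.]

#include <stdbool.h>
#include <string.h>

#define QMAX 16   /* sketch only: no full-queue or overflow handling */

typedef struct { int label; } Event;

typedef struct {
    Event self_q[QMAX]; int self_head, self_tail;  /* explicitly self-directed */
    Event inst_q[QMAX]; int inst_head, inst_tail;  /* instance-to-instance     */
} InstanceQueues;

void queues_init(InstanceQueues *q)                  { memset(q, 0, sizeof *q); }
void post_self_directed(InstanceQueues *q, Event e)  { q->self_q[q->self_tail++ % QMAX] = e; }
void post_instance_event(InstanceQueues *q, Event e) { q->inst_q[q->inst_tail++ % QMAX] = e; }

/* Self-directed events are accepted before anything else; among themselves
 * they come off in the order sent (the sender/receiver-pair reading).  An
 * ordinary event is accepted only when no self-directed event is pending. */
bool accept_next(InstanceQueues *q, Event *out)
{
    if (q->self_head != q->self_tail) { *out = q->self_q[q->self_head++ % QMAX]; return true; }
    if (q->inst_head != q->inst_tail) { *out = q->inst_q[q->inst_head++ % QMAX]; return true; }
    return false;
}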
-- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: RE: (SMU) OOA 96 Timers (was Synchronous vs. "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- >Interesting point. The view that I have been taking, essentially a FIFO view of >self-directed events, leads to a violation of the self-directed event rule as >written. But the other view, a LIFO view of self-directed events, leads to a >violation of the single sender/receiver pair rule. There are three ways out: > >1) Allow the ambiguity in the method, let the analyst pick, and make sure you >use an architecture that supports the adopted view, similar to the existing >latitude allowed in the view of time. >2) Prohibit multiple self-directed events generated in a single execution of a >single state action. >3) Reformulate the rules to remove the ambiguity. > >Anyone have a preference? In other words, a) does anybody really need to do >this, and b) are both the LIFO and the FIFO approaches necessary under some >circumstances? Oh, good, an election! Why not get rid of the priority rule? Namely, the method could allow action-on-transition within the Moore model and thus eliminate 99.9% of self-directed events. The rest of the self-directed events (i.e., the real ones) should not need prioritization over events from the outside. I think event priorities are out of synch with the spirit of the method. -Chris --------------------------------------------------- Chris Lynch Abbott AIS San Diego CA lynchcd@hpd.abbott.com "If you're as clever as you can be when you design it, how will you ever debug it?" Kernighan and Plauger --------------------------------------------------- Subject: Re: (SMU) Subtype vs. Supertype instances Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- I think we'll have to agree to disagree on this point. Example: in OOSA:MWD, section 6.1 discusses the supertype/subtype relationship. They repeatedly refer to the "subtype object" and the "supertype object". They do not so much as hint that the supertype object is any less an entity than the subtype object. Personal conversations with Steve Mellor further lead me to adopt the view that each is a real object. lahman on 12/15/99 08:36:37 AM Please respond to shlaer-mellor-users@projtech.com To: shlaer-mellor-users@projtech.com cc: (bcc: Erick Hagstrom/Nashville/Envoy) Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hagstrom... > Incorrect on two counts. 1) There really is a separate instance for the > supertype in OOA. It is a distinct entity on the OIM and participates in a > relationship with its subtypes. It has identifiers. It can carry data > attributes. It can even have a state model, or pieces of one. One has to be careful to distinguish between the notation and the instantiation. 
For example, in S-M the supertype state model is simply a notational convenience to eliminate the redundant specification of common states and transitions in the leaf objects; the intent was definitely not for independent instantiation (clarified by Sally Shlaer on this forum a few years back). In an Entity Relationship Diagram, the forerunner of the OIM, the supertype was used as a convenient mechanism for describing data hierarchies in RDBs. However, in the RDB instantiation the tables were the union of attributes from the subtypes and supertypes. The is-a relationship in the ERD was used instead for optimizing index structures for more efficient access of data unique to the subtypes. Subject: Re: (SMU) Subtype vs. Supertype instances Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- NF1 does not discuss keys. NF1 is characterized by a lack of repeating groups of values (SM 1st rule) and a lack of internal structure within attributes (SM 2nd rule). NF2 addresses compound identifiers: "every attribute that is not part of the identifier represents a characteristic of the entire object, and not a characteristic of something which would be identified by only a part of the identifier" OOSA:MWD p. 44 (SM 3rd rule). NF3 prohibits attributes dependent on a foreign key from residing in the object (SM 4th rule). None of these forms prohibit dividing attributes between two objects with the same key--in fact, there are higher normal forms which actually encourage such things. In fact, OOSA:MWD has an example in which two objects have the same key. In appendix A on p. 110 there is an OIM for a Magnetic Tape Management system. There are two objects, TSAR and TAPE ROBOT, related by a simple 1:1 relationship. The ID for TSAR is, not surprisingly, TSAR ID. The relationship is formalized on the TAPE ROBOT side using the TSAR ID attribute. Here's the surprise: TAPE ROBOT uses that same TSAR ID attribute as its ID. BTW, I would speculate that this situation might arise primarily in situations such as the above, when an object uses a relational attribute as its ID. lahman on 12/15/99 08:36:37 AM Please respond to shlaer-mellor-users@projtech.com To: shlaer-mellor-users@projtech.com cc: (bcc: Erick Hagstrom/Nashville/Envoy) Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hagstrom... > 2) There is no > violation of normal form. Two objects can have the same identifiers--check > almost any 1:1 relationship on almost any OIM for an example. There is no OOA > normal form rule which prohibits this situation. I am afraid I do not understand your point about the 1:1 relationship. The objects that are related in that situation always have different keys (unless they are reflexive). My point was about keys and the attributes uniquely defined by them. NF1 defines a key as A -> B such that if B is a set of attributes, then B is uniquely determined by A. If we have a second set of attributes, C, such that A -> C, then B and C are transitively dependent on A per NF3 so we have A -> (B U C). Thus if an object is instantiated with a particular key value, the union of all of the attributes that are uniquely determined by that key value must be collected with it in the instantiation. Subject: RE: (SMU) OOA 96 Timers (was Synchronous vs. "Leslie A. 
Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- I can't fault your argument, below, that the method rules are unambiguous as they stand (mainly because I haven't been following th thread closely enough), but I want to vote anyway. And my vote goes with Chris's suggestion - let's allow action-on-transitions within the Moore model. Anyone surprised? Thought not :-) Leslie. -- On Wed, 15 Dec 1999 10:10:57 David.Whipp wrote: >David.Whipp@infineon.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Erick.Hagstrom@envoy.com wrote >> 1) Allow the ambiguity in the method, let the analyst pick, >> and make sure you use an architecture that supports the >> adopted view, similar to the existing >> latitude allowed in the view of time. >> 2) Prohibit multiple self-directed events generated in a >> single execution of a single state action. >> 3) Reformulate the rules to remove the ambiguity. >> >> Anyone have a preference? > >As you may have guessed from a previous thread, my point of view >is that one rule applies to events directed to an an identified >instance, and the other applies to events directed at self. We >don't need to do any of the above because the current rules are >correctly formulated. > >Consider the OOA-of-OOA: you could have a supertype named >"Event" with subtypes "Instance-to-Instance Event" and >"Self-Directed Event" (better names probably could be found). >Then, one rule defines the behaviour of one subtype, the other >defines the behaviour of the other. Two subtypes, two >behaviors: no problem! > > >Dave. >-- >Dave Whipp, Senior Verification Engineer >Infineon Technologies Corp., San Jose, CA 95112 >mailto:David.Whipp@infineon.com tel. (408) 501 6695 >Opinions are my own. Factual statements may be in error. > __________________________________________________________________ Get your own free England E-mail address at http://www.england.com Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs. Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- Dave, please clarify if this means you can have your cake and eat it too? (Suppose an instance-to-instance event is deliberately directed to your own object-Id rather than to another object: this could mean the event is NOT treated as self-directed. This would be possible if a self-directed event was a distinct subtype or otherwise distinguishable from a non-self-directed event in some other way than by comparing the destination objectId to this or self???) Thanks, Bob Lechner UMass-Lowell lechner@cs.uml.edu > > David.Whipp@infineon.com writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Erick.Hagstrom@envoy.com wrote > > 1) Allow the ambiguity in the method, let the analyst pick, > > and make sure you use an architecture that supports the > > adopted view, similar to the existing > > latitude allowed in the view of time. > > 2) Prohibit multiple self-directed events generated in a > > single execution of a single state action. > > 3) Reformulate the rules to remove the ambiguity. > > > > Anyone have a preference? > > As you may have guessed from a previous thread, my point of view > is that one rule applies to events directed to an an identified > instance, and the other applies to events directed at self. We > don't need to do any of the above because the current rules are > correctly formulated. 
> > Consider the OOA-of-OOA: you could have a supertype named > "Event" with subtypes "Instance-to-Instance Event" and > "Self-Directed Event" (better names probably could be found). > Then, one rule defines the behaviour of one subtype, the other > defines the behaviour of the other. Two subtypes, two > behaviors: no problem! > > > Dave. > -- > Dave Whipp, Senior Verification Engineer > Infineon Technologies Corp., San Jose, CA 95112 > mailto:David.Whipp@infineon.com tel. (408) 501 6695 > Opinions are my own. Factual statements may be in error. > Subject: RE: (SMU) OOA 96 Timers (was Synchronous vs. Asynchronous behavio David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bob Lechner writes to shlaer-mellor-users: > > Dave, please clarify if this means you can have your cake and > eat it too? Yum, Yum: this cake is good. Wow, its still there! > (Suppose an instance-to-instance event is deliberately directed > to your own object-Id rather than to another object: this could mean > the event is NOT treated as self-directed. This would be possible if > a self-directed event was a distinct subtype or otherwise > distinguishable from a non-self-directed event in some other way > than by comparing the destination objectId to this or self???) Correct. In previous posts I've used the term "coincidentally directed at self" to contrast with "explicitly self-directed" Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) OOA 96 Timers (was Synchronous vs.Asynchronousbehavior) lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > Is it? The self-directed event rule from OOA96 states, "If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance." If I generate two self-directed events in the same state, does the first one generated fall under this rule? If it does, then the second one generated MUST be accepted before the first. The reason I brought this up in the first place is that I believe you can argue both sides. Either 1) self directed events take precedence over all pending events including other pending self-directed events, or 2) multiple self-directed events take precedence over 'regular' events, but must obey the sender-receiver pair rule amongst themselves. There is a second rule that is also relevant. If there are multiple events between the same two instances, those events must be delivered in the order they were issued. If one can truly argue both sides, then I would argue this is the tie-breaker. I would argue more strongly that there is no obvious reason why this rule should not apply in the special case where the instances are the same. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hagstrom... > I think we'll have to agree to disagree on this point. Example: in OOSA:MWD, > section 6.1 discusses the supertype/subtype relationship. 
They repeatedly refer > to the "subtype object" and the "supertype object". They do not so much as hint > that the supertype object is any less an entity than the subtype object. > Personal conversations with Steve Mellor further lead me to adopt the view that > each is a real object. I guess we'll have to because I don't read 6.1 that way at all. The section starts off with only two distinct objects and it opines that it would be desirable to identify the common characteristics. It then proceeds to describe a notational artifact to do so. But that does not imply that there are any more than the two original underlying types of entities. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hagstrom... > NF1 does not discuss keys. NF1 is characterized by a lack of repeating groups of > values (SM 1st rule) and a lack of internal structure within attributes (SM 2nd > rule). NF2 addresses compound identifiers: "every attribute that is not part of > the identifier represents a characteristic of the entire object, and not a > characteristic of something which would be identified by only a part of the > identifier" OOSA:MWD p. 44 (SM 3rd rule). NF3 prohibits attributes dependent on > a foreign key from residing in the object (SM 4th rule). None of these forms > prohibit dividing attributes between two objects with the same key--in fact, > there are higher normal forms which actually encourage such things. We seem to be reading different books. NF1 goes deeper than the ten-words-or-less characterization of NF1 as being about each member of a set of attributes being an elementary data item. Even in the limited space of my Dictionary of Computing, which starts with that characterization, there is a follow-up paragraph about the implications that leads to the following statement. "A logical consequence of the definition is that if the values of a particular attribute in a relation are necessarily distinct (e.g., if it is a key) then all other attributes of the relation [A -> B] are functionally dependent on it, and similarly for a set of attributes (e.g., a compound key)." The only database book I have handy (Data Base Management Systems by Tsichritizis and Lochovsky) also spends two pages of set theory coming to the same conclusion in the NF1 section. It also goes on to describe NF2 in terms of the transitive dependencies that I cited, concluding, "Transitive dependencies also lead to the insertion/deletion anomalies and consistency problems mentioned earlier." > In fact, OOSA:MWD has an example in which two objects have the same key. In > appendix A on p. 110 there is an OIM for a Magnetic Tape Management system. > There are two objects, TSAR and TAPE ROBOT, related by a simple 1:1 > relationship. The ID for TSAR is, not surprisingly, TSAR ID. The relationship is > formalized on the TAPE ROBOT side using the TSAR ID attribute. Here's the > surprise: TAPE ROBOT uses that same TSAR ID attribute as its ID. I think this is simply a matter of poor naming conventions. The identifier for TAPE ROBOT should have been TAPE ROBOT ID. It is the identifier for a TAPE ROBOT, not a TSAR.
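[For illustration only: one hypothetical C rendering of the two formalisations being contrasted here; neither struct is taken from OOSA:MWD.]

/* As published: TAPE ROBOT borrows TSAR ID as its own identifier, welding the
 * identifier and the relationship together. */
struct TapeRobot_shared_id {
    int tsar_id;           /* identifier AND referential attribute */
    /* ... other TAPE ROBOT attributes ... */
};

/* The alternative argued for here: TAPE ROBOT has its own identifier and a
 * distinct referential attribute, so moving a robot to another TSAR later
 * touches only that attribute. */
struct TapeRobot_separate_id {
    int tape_robot_id;     /* identifier */
    int tsar_id;           /* referential attribute formalising the 1:1 to TSAR */
    /* ... other TAPE ROBOT attributes ... */
};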
The fact that its value can be used as a referential attribute to TSAR reflects an entirely different piece of information in the problem space (i.e., a TAPE ROBOT lives its entire life cycle within a particular TSAR) that is not explicitly defined by the relationship (i.e., the relationship itself does not preclude reassigning the TAPE ROBOT to another TSAR during the execution). Generally identifier names should only be duplicated across objects when using compound identifiers. FWIW, I regard this technique for representing the relationship (i.e., making the identifier serve double duty as a referential attribute) as dangerously close to implementation pollution. It places unnecessary constraints on the translation by limiting the mechanisms that can be used for navigating the relationship and identifying TAPE ROBOT instances by tying them together. [An exception is the more general elements of compound keys. The architecture is going to have to deal with compound identifiers specially anyway and it makes the model easier to follow.] In this case I see no reason for TAPE ROBOT not having a separate TSAR ID referential attribute. In fact, the model seems to unnecessarily restrict the problem. If, in the future, one wanted to move TAPE ROBOTs around to different TSARs, more changes would be required to the model. Since the fact that they don't currently move around could have been documented in the relationship description, there is no compelling need for this model restriction. [One could argue that a relationship description is as much a part of the model as the (R) notation. I agree insofar as importance is concerned, but I see a distinction in that only the diagrams are guaranteed to be unambiguous. I prefer to make the unambiguous description as general as possible to minimize maintenance.] Note that this model has some other problems. For example, the identifiers for TSAR TAPE LOCATION in the diagram are different than in the text description. Similarly, if Wall, Tier, and Rank are alternative identifiers for the supertype, they must be alternatives for the subtype. The editor must have been on holiday. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: Re: (SMU) Synchronous vs. Asynchronous behavior lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Riemenschneider... > OL:MWS seems to indicate that in subtype migration cases, the > lifecycle EITHER exists in the supertype OR in the subtypes. To me, this > either/or seems to represent the real world case of a migratory subtype. Can > there be cases where migratory subtypes share a common portion of a lifecycle > in the supertype? A common situation in subtypes is that each subtype has different relationships with other objects. This implies that the create and delete states must be different to ensure consistency. One could conceive of situations where the rest of the behavior was the same. We have concrete migration situations for tester pins that play different roles in different tests. Those roles require different hardware initializations, etc. However, they all have some common states such as Connected, Disconnected, and Ready To Load. We tend to use splicing to capture this sort of thing. > "Splicing" is covered for the non-migrating subtypes in > OL:MWS.
Not allowing splicing in the migratory sub/supertype lifecycles > would prevent the aforementioned "polymorphic events" in the absence of a > subtype from happening. > Allowing splicing in migratory cases would be to say that you have events > directed at an instance that are not aware of its present "condition". One potentially faces the problem of pending events any time one deletes an instance. It occurs to me that in this discussion we have been making an implicit assumption that it might be desirable to have a pending event carry over to the migrated subtype. I am not sure that is valid. In effect we are using the kill/create sequence to emulate a problem space notion -- that the same entity migrates between roles -- that has no construct in the notation. This is like applying a design pattern to the problem. Saying that we should allow carry-over events is essentially creating a requirement based upon the fact that we used a particular design pattern. I would argue that if one is going to kill an instance asynchronously, then one had better make sure there aren't any pending events. This does not depend upon migration. If there is some rare situation where one really, really wants to have carry-over events, then one still has a mechanism to do it: make the event polymorphic and use a synchronous kill/create. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: (SMU) Administrative Test "A. Keith Fligg" writes to shlaer-mellor-users: -------------------------------------------------------------------- Please ignore this message. Thanks, - Keith ------------------------- Shlaer-Mellor Method ------------------------- A. Keith Fligg Tel : (520) 544-2881 ext. 18 Technical Staff Fax : (520) 544-2912 Project Technology, Inc. e-mail: keith@projtech.com 7400 N. Oracle Road, Suite 365 URL : http://www.projtech.com Tucson, AZ 85704-6342 -------------------------- Real-Time, On-Time -------------------------- Subject: Re: (SMU) Subtype vs. Supertype instances "Leslie A. Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- Oh goody! Mr. Lahman has sent an email that I think I understand and think I can make useful comment on :-) I use the concept of duplicate identifiers quite a lot. In fact I (maybe mistakenly) thought that SM encouraged reuse of identifiers, particularly when one has a 1-1 relationship. Whether you agree or not, there is one case where I think it necessary to share an identifier between objects. That is if one has a composition (am I allowed to say that on the ESMUG?) relationship. For example, if a car is composed of 4 wheels and one steering wheel, then the wheels take identifiers Car_ID and Wheel_ID, and the steering wheel has just Car_ID as an identifier.
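[A sketch of that compound-identifier arrangement in C; the struct names and members are illustrative only.]

struct Wheel {
    int car_id;    /* part of the compound identifier; also formalises the
                      relationship to Car */
    int wheel_id;  /* distinguishes the four wheels of one car */
};

struct SteeringWheel {
    int car_id;    /* sole identifier: exactly one steering wheel per car */
};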
Something I find myself doing quite often, is breaking down an object which has become overly complicated into component parts in order to break up the object functionality. In this case I really have one object represented as two objects, each of which take identical Identifiers. So in the example quoted below, where TSAR and TAPE_ROBOT have the same identifier, that indicates to me that there is a 1-1 relationship between the two objects and that they are so closely tied that one should not move TAPE_ROBOTS around. I don't consider this implementation pollution, but useful additional information. Now in the example quoted, it may be incorrect to make this assumption (without the model in front of me I can't tell), but I can think of accassions when it is useful. Leslie. -- On Fri, 17 Dec 1999 09:44:45 lahman wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Hagstrom... > >> In fact, OOSA:MWD has an example in which two objects have the same key. In >> appendix A on p. 110 there is an OIM for a Magnetic Tape Management system. >> There are two objects, TSAR and TAPE ROBOT, related by a simple 1:1 >> relationship. The ID for TSAR is, not surprisingly, TSAR ID. The relationship is >> formalized on the TAPE ROBOT side using the TSAR ID attribute. Here's the >> surprise: TAPE ROBOT uses that same TSAR ID attribute as its ID. > >I think this is simply a matter of poor naming conventions. The identifier for TAPE >ROBOT should have been TAPE ROBOT ID. It is the identifier for a TAPE ROBOT, not a >TSAR. The fact that its value can be used as a referential attribute to TSAR >reflects an entirely different piece of information in the problem space (i.e., a >TAPE ROBOT lives its entire life cycle within a particular TSAR) that is not >explicitly defined by the relationship (i.e., the relationship itself does not >preclude reassigning the TAPE ROBOT to another TSAR during the execution). >Generally identifier names should only be duplicated across objects when using >compound identifiers. > >FWIW, I regard this technique for representing the relationship (i.e., making the >identifier serve double duty as a relational attribute) as dangerously close to >implementation pollution. It places unnecessary constraints on the translation by >limiting the mechanisms that can be used for navigating the relationship and >identifying TAPE ROBOT instances by tying them together. [An exception are the more >general elements of compound keys. The architecture is going to have to deal with >compound identifiers specially anyway and it makes the model easier to follow.] > >In this case I see no reason for TAPE ROBOT not having a separate TSAR ID >referential attribute. In fact, the model seems to unnecessarily restrict the >problem. If, in the future, one wanted to move TAPE ROBOTs around to different >TSARs, more changes would be required to the model. Since the fact that they don't >currently move around could have been documented in the relationship description, >there is no compelling need for this model restriction. [One could argue that a >relationship description is as much a part of the model as the (R) notation. I >agree insofar as importance is concerned, but I see a distinction in that only the >diagrams are guaranteed to be unambiguous. I prefer prefer to make the unambiguous >description as general as possible to minimize maintenance.] > >Note that this model has some other problems. 
For example, the identifiers for TSAR >TAPE LOCATION in the diagram are different than in the text description. Similarly, >if Wall, Tier, and Rank are alternative identifiers for the supertype, they must be >alternatives for the subtype. The editor must have been on holiday. B-) > Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'? Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- FWIW, here is my take on this ongoing discussion of the supertype/subtype relationship: Perhaps the difference is because separate super and sub-class instances are the way that code must be generated for C but not for C++? BTWay I don't have a copy of OOSA/MWD [?], but this comment may apply equally well to any super-vs-sub-type separate instantiation decision. It might be interesting to see which S-M-like tools generate C or C/C++ code and which ones generate only C++. I have built the C-code-generating variety for educational uses, and am migrating them to generate C++. From a schema.sch file representing the information model (Extended ER Diagram), I create AC (Active Class) and AI (Active Instance) supertypes with specialized (TImer/OPerator/application-specific) classes and instances as (simulated) subtypes. Each class also has a container for its multiple instances, and parent-child looping macros to navigate 1:M relationships. The main reason that AC and AI are provided is so that each AI can be linked via its AC to the State Model interpreter (or code generator) in a non-polymorphic way, independent of the specific class whose behavior is defined by its state model. In C++, these separate AI instances would NOT be necessary. In moving these tools to the C++ target language, AC and AI would merge into a single class: the AC reduces to the static content of the class, and the AI becomes the instance-specific state data for the same class. Parent-child looping macros become iterators. Thus, what were separate super and sub classes now become a single class. Static data includes links to state model data and foreign-key links to support class and/or instance-specific associations. The latter define communicating paths: these are not only class-level Object [sic!] Communication Diagrams (OCD's) but also Instance-level Interconnection or Event path 'wiring' Diagrams. I find it annoying that I must call the instance-level constraints IID's or EPD's instead of OCD's, because S-M pre-empted OCD instead of CCD as the acronym for what is really a class-level inter-communication function :-(. Bob Lechner Computer Science Dept. UMass-Lowell lechner@cs.uml.edu > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Hagstrom... > > > I think we'll have to agree to disagree on this point. Example: in OOSA:MWD, > > section 6.1 discusses the supertype/subtype relationship. They repeatedly refer > > to the "subtype object" and the "supertype object". They do not so much as hint > > that the supertype object is any less an entity than the subtype object. > > Personal conversations with Steve Mellor further lead me to adopt the view that > > each is a real object. > > I guess we'll have to because I don't read 6.1 that way at all.
The section starts > off with only two distinct objects and it opines that it would be desirable to > identify the common characteristics. It then proceeds to describe a notational > artifact to do so. But that does not imply that there any more than the two > original underlying types of entities. > > -- > H. S. Lahman There is nothing wrong with me that > Teradyne/ATD could not be cured by a capful of Drano > MS NR22-11 > 600 Riverpark Dr. > N. Reading, MA 01864 > (Tel) (978)-370-1842 > (Fax) (978)-370-1100 > lahman@atb.teradyne.com > > Subject: RE: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'? David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bob Lechner writes to shlaer-mellor-users: > Perhaps the difference is because separate super and > sub-class instances are the way that code must be > generated for C but not for C++? I could argue this the other way round. (But please realise that I can play the game both ways: my point is that you can't make these sweeping generalizations about architecture. You can always find a different way to do something.) In C, you can store the subtype data in a union within a supertype struct. Migration is a simple matter of changing the tag value (and possibly initializing some values). In C++ unions are generally a bad idea (because you lose ctors), so you'd tend to use separate classes. Whilst you could use inheritance to link these classes, to do so causes problems for migration. Its usually better to use something like the state-pattern when migration is likely. So C can use a single struct, and C++ can use separate classes. Dave. -- Dave Whipp, Senior Verification Engineer Infineon Technologies Corp., San Jose, CA 95112 mailto:David.Whipp@infineon.com tel. (408) 501 6695 Opinions are my own. Factual statements may be in error. Subject: Re: (SMU) Subtype vs. Supertype instances lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- REsponding to Munday... > Whether you agree or not, there is one case where I think it necessary to share an identifier between objects. That is if one has a composition (am I allowed to say that on the ESMUG?) relationship. > > For example, If a car is composed of 4 wheels and one steering wheel, then the wheels take identifiers Car_ID and Wheel_ID, and the steering wheel has just Car_ID as an identifier. I have no problem with this. I specifically noted this exception in my second paragraph. We routinely use compound identifiers to ensure consistency of navigation in relationship loops. In that situation it is actually preferable to share the identifier names. > Something I find myself doing quite often, is breaking down an object which has become overly complicated into component parts in order to break up the object functionality. > > In this case I really have one object represented as two objects, each of which take identical Identifiers. > > So in the example quoted below, where TSAR and TAPE_ROBOT have the same identifier, that indicates to me that there is a 1-1 relationship between the two objects and that they are so closely tied that one should not move TAPE_ROBOTS around. This one I don't buy because neither identifier is compound. So I don't like duplicated names for the reasons I gave originally. > I don't consider this implementation pollution, but useful additional information. I said 'dangerously close' as I recall. 
The useful information is that a TAPE ROBOT exists only in a single TSAR throughout its life cycle. If this is recorded in the relationship description, it is still captured in the model without affecting the unambiguous static description in the IM diagram. I believe the translation would still Do The Right Thing in this case. My argument is around the nature of the changes one would have to make if one changed one's mind and allowed TAPE ROBOTs to wander about more. Admittedly this is mostly an aesthetic consideration because the changes are very nearly identical... Original way: Change an action to instantiate the relationship when TAPE ROBOT wanders. Change the IM diagram. My way: Change an action to instantiate the relationships when TAPE ROBOT wanders. Change the relationship description to remove the limitation ...and I can't come up with an example where this would be significant. But I know good art when I see it. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'?
lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lechner... > Perhaps the difference is because separate super and sub-class instances > are the way that code must be generated for C but not for C++? > BTW, I don't have a copy of OOSA/MWD [?], but this comment may apply > equally well to any super-vs-sub-type separate instantiation decision. > > It might be interesting to see which S-M-like tools generate C or C/C++ > code and which ones generate only C++. This is a good point that I never connected with while bewailing that our tool vendor creates separate instances. The vendor's architecture that we use is a straight C architecture! Some days things just don't play out well. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'?
Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- OOSA:MWD was meant to refer to Object Oriented Systems Analysis: Modeling the World in Data, Sally Shlaer and Steve Mellor, 1989(?). It describes the Object Information Model in some detail, but doesn't develop the method further. Their later book, Object Lifecycles: Modeling the World in States (sometimes referred to as OL:MWS) takes the method through State Models into Action Data Flow Diagrams, discusses Domains, and offers some design material as well. Sorry for the vague reference. Bob Lechner wrote on 12/19/99: > BTW, I don't have a copy of OOSA/MWD [?], but this comment may apply equally well to any super-vs-sub-type separate instantiation decision.
Subject: Re: (SMU) Subtype vs. Supertype instances
Erick.Hagstrom@envoy.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Mr. Lahman, you've sent me back to the library. I remain unconvinced and seek bigger guns and better ammunition. ;-) Stay tuned...

Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'?
lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > In C, you can store the subtype data in a union within a > supertype struct. Migration is a simple matter of changing > the tag value (and possibly initializing some values). I assume you meant embedding a supertype struct in the subtype rather than a union. It appears that I was in the throes of severe substance abuse with my reply to Lechner yesterday -- especially when I recall that we actually did this several years ago in a manual C architecture (which happened to use subtype migration). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'?
Bob Lechner writes to shlaer-mellor-users: -------------------------------------------------------------------- A union of subtypes makes sense to me - it seems more 'traditional'. FWIW, I don't believe in subtype migration for the purpose of changing an object's life-cycle state; should we really hide the 'real' action routine's name behind an anonymous Active Class method call? Then the state name has the burden of suggesting its immediate behavior as well as the object's past history. OTOH, subtyping for the purpose of specializing a generic Active Class to any concrete class is a great idea. It is static rather than dynamic. Bob Lechner CS Dept. UMass-Lowell lechner@cs.uml.edu > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Whipp... > > > In C, you can store the subtype data in a union within a > > supertype struct. Migration is a simple matter of changing > > the tag value (and possibly initializing some values). > > I assume you meant embedding a supertype struct in the subtype rather > than a union. > > It appears that I was in the throes of severe substance abuse with my > reply to Lechner yesterday -- especially when I recall that we actually > did this several years ago in a manual C architecture (which happened to > use subtype migration). > > -- > H. S. Lahman There is nothing wrong with me that > Teradyne/ATD could not be cured by a capful of Drano > MS NR22-11 > 600 Riverpark Dr. > N. Reading, MA 01864 > (Tel) (978)-370-1842 > (Fax) (978)-370-1100 > lahman@atb.teradyne.com > >

Subject: RE: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'?
David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman > > In C, you can store the subtype data in a union within a > > supertype struct. Migration is a simple matter of changing > > the tag value (and possibly initializing some values). > > I assume you meant embedding a supertype struct in the > subtype rather than a union. You can do it either way. The advantage of embedding a "union subtypes" in the "struct supertype" is that migration is trivial. If you embed a "struct supertype" within each "struct subtype" then you have to delete/recreate the supertype every time you migrate. Dave
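[Purely as an illustration of the trade-off being argued here, a minimal sketch of migration under the embedded-supertype layout. The names (SUPER_TYPE, SUB_X, migrate_y_to_x) are invented for this sketch, not taken from any architecture in the thread, and error handling is elided.]

#include <stdlib.h>

/* Layout with the supertype struct embedded in each subtype struct. */
typedef struct {
    int type;          /* subtype tag */
    int common_attr;   /* data common to all subtypes */
} SUPER_TYPE;

typedef struct { SUPER_TYPE super; int x_attr; } SUB_X;
typedef struct { SUPER_TYPE super; int y_attr; } SUB_Y;

enum { TYPE_X, TYPE_Y };

/* Migrating Y to X means deleting one instance and creating another:
 * the supertype data is copied across, the old instance is freed, and
 * every stored pointer to the old instance must be found and repaired.
 */
SUB_X *migrate_y_to_x(SUB_Y *y)
{
    SUB_X *x = malloc(sizeof *x);
    x->super = y->super;     /* carry the common data across */
    x->super.type = TYPE_X;
    x->x_attr = 0;           /* initialize X-specific data as required */
    free(y);
    return x;
}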
If you embed a "struct supertype" within each "struct subtype" then you have to delete/ recreate the supertype every time you migrate. Dave Subject: (SMU) Polymorphic Events "Stephen Guercio" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi everyone, I'm looking into using polymorphic events in a model I'm working on. OOA96 describes them pretty well, but I'm in a quandry as to how to document them. The paper describes a correlation table to be created by the analysts that relates the polymorphic events with their "real" counterparts in the associated sub-types. I'd prefer to document this correlation in the models themselves somehow, but can't see a way to do it. I guess I'm just lazy and don't want to make a separate table outside of my models! Any experiences or suggestions? -- Steve Guercio Motorola Wireless Subscriber Division Semiconductor Products Sector 20250 Century Blvd. Germantown, MD 21770 Subject: RE: (SMU) Polymorphic Events "Peter J. Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Stephen Guercio" writes to > shlaer-mellor-users: > -------------------------------------------------------------------- > I'm looking into using polymorphic events in a model I'm working on. > OOA96 describes them pretty well, but I'm in a quandry as to how to > document them. The paper describes a correlation table to be created by > the analysts that relates the polymorphic events with their "real" > counterparts in the associated sub-types. Hi Steve - here at Pathfinder we felt that there was no need to model additional subtype events. We simply show the supertype events on the subtype models as appropriate. The use of Object Keyletters (Class Prefixes for us UML types) in the event label clearly shows readers of the subtype state model that these are polymorphic events. _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > I assume you meant embedding a supertype struct in the > > subtype rather than a union. > > You can do it either way. The advantage of embedding a > "union subtypes" in the "struct supertype" is that > migration is trivial. If you embed a "struct supertype" > within each "struct subtype" then you have to delete/ > recreate the supertype every time you migrate. But there is a cost associated with the union. If someone gets hold of the wrong subtype there is no way to identify the problem unless you have a run-time ASSERT on the subtype type in each place where a subtype is accessed. If one embeds in each subtype, an invalid access can be identified at compile time. Also, you don't need to reallocate the memory for migration when using embedding. You could do what you want with something along the lines of subtypeX* subtypeY::migrate_to_X (void) { // update unique relationships, then... this.type = X_TYPE; return (subtypeX*) this; } The subtypeX* would still provide the compile time type safety for accessing the correct subtype data. -- H. S. 
Subject: Re: (SMU) Polymorphic Events
lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Guercio... > I'd prefer to document this correlation in the models themselves > somehow, but can't see a way to do it. I guess I'm just lazy and don't > want to make a separate table outside of my models! If your CASE tool supports polymorphic events, there should be a mechanism for defining the table in the tool. Fontana's solution works -- provided your model tool allows enough freedom in the event naming conventions. Also, if you have an OTS code generator other than System Architect I would run an experiment to make sure this works properly without confusing the code generator. It may try to look up the event in the wrong STT and become ill. [Catch-22: this is probably not a problem because if you had an OTS code generator that supported polymorphic events at all, it would have a way of specifying the table in the OOA.] Lacking that you can at least use the individual event descriptions to document where they go. That gets messy if you do it literally because you would have to enumerate each event in each subtype mapped to the polymorphic event. I would regard it as fair to have the event description point to the external table. That effectively draws the external table into the model, albeit with less convenience. There is potentially a configuration management problem in keeping the contents of versions in synch when using an external table. However, one usually has this problem anyway. For example, most of the requirements that affect only the translation are always outside the OOA tool and these have to be kept in synch with the OOA as well. So I don't see anything unusual about having the table outside the models that one doesn't have to deal with anyway. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com

Subject: Re: (SMU) Polymorphic Events
baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- > > "Stephen Guercio" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Hi everyone, > > I'm looking into using polymorphic events in a model I'm working on. > OOA96 describes them pretty well, but I'm in a quandary as to how to > document them. The paper describes a correlation table to be created by > the analysts that relates the polymorphic events with their "real" > counterparts in the associated sub-types. > > I'd prefer to document this correlation in the models themselves > somehow, but can't see a way to do it. I guess I'm just lazy and don't > want to make a separate table outside of my models! > > Any experiences or suggestions? > I agree with Fontana. You can eliminate the correlation table entirely and simply receive events in the subtype that are directed to the supertype. [I don't know of any reason in the method why the correlation table is important. Maybe someone else has an idea.] If you are using a specific SM OOA tool, then it probably has some method of dealing with this. The one we use has no correlation table. When I create an event which is directed to the supertype, it automatically becomes available to all the subtypes. Bary Hogan Lockheed Martin Tactical Aircraft Systems Fort Worth, TX bary.d.hogan@lmco.com (817) 763-2620
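[For readers who want to see the mechanics: whatever a particular tool does, the correlation OOA96 describes can be captured as plain data in a hand-written C architecture. The sketch below is purely illustrative; the names, enums, and event numbers are invented and are not drawn from any of the tools mentioned in this thread.]

/* Dispatch a polymorphic (supertype) event to the event label used by
 * the instance's current subtype state machine.
 */
typedef enum { SUBTYPE_A, SUBTYPE_B, NUM_SUBTYPES } subtype_t;
typedef enum { EV_SUPER_GO, EV_SUPER_STOP, NUM_POLY_EVENTS } poly_event_t;

/* The OOA96 correlation table captured as data: for each subtype, the
 * concrete event number that each polymorphic event maps to.
 */
static const int correlation[NUM_SUBTYPES][NUM_POLY_EVENTS] = {
    /* SUBTYPE_A */ { 12, 7 },
    /* SUBTYPE_B */ {  5, 7 },
};

/* Return the subtype event implied by a polymorphic event. */
int map_polymorphic_event(subtype_t current_subtype, poly_event_t ev)
{
    return correlation[current_subtype][ev];
}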
Subject: Re: (SMU) Polymorphic Events
"Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- We added an extra diagram. On the diagram, the poly receiver shape shows inflows for each event accepted. The outflows to the poly destination shape show the mappings (A12=B5,A7=B7). The poly receiver also shows what attribute specifies the type, and each receiver shows the value of this type attribute. (This allows the analysis and architecture to both know what the type attribute is and what the specific values are for individual subtypes.) This shows on one chart all of the polymorphic events, what subtypes can receive them, the mappings between supertype and subtype event labels, and the values of the attribute that specifies the type. I do agree with Peter Fontana however, that if you put the supertype events on the subtype state diagrams, this would be unnecessary. (I would then like to be able to specify the value of the type attribute, for each subtype, on the information model.) Advice: Unless you want to write your own case tool, buy one and use whatever mechanism it provides. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering E. F. Johnson Company Waseca Operations Center 299 Johnson Ave SW Waseca, MN 56093-0514 dsimonson@efjohnson.com www.efjohnson.com >>> "Stephen Guercio" 12/21 2:44 PM >>> "Stephen Guercio" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi everyone, I'm looking into using polymorphic events in a model I'm working on. OOA96 describes them pretty well, but I'm in a quandary as to how to document them. The paper describes a correlation table to be created by the analysts that relates the polymorphic events with their "real" counterparts in the associated sub-types. I'd prefer to document this correlation in the models themselves somehow, but can't see a way to do it. I guess I'm just lazy and don't want to make a separate table outside of my models! Any experiences or suggestions? -- Steve Guercio Motorola Wireless Subscriber Division Semiconductor Products Sector 20250 Century Blvd. Germantown, MD 21770

Subject: Re: (SMU) Polymorphic Events
"Stephen Guercio" writes to shlaer-mellor-users: -------------------------------------------------------------------- First, thanks everyone for your ideas. With all this mention of CASE tools, I was just wondering: Is it considered in poor taste in this forum to mention what tools we're using by name? The tool I'm using is UML-based, without direct Shlaer-Mellor support, so I'll probably go with Pete Fontana's idea. Dana Simonson wrote: > "Dana Simonson" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > We added an extra diagram. On the diagram, the poly receiver shape shows inflows for each event accepted. The outflows to the poly destination shape show the mappings (A12=B5,A7=B7). The poly receiver also shows what attribute specifies the type, and each receiver shows the value of this type attribute. (This allows the analysis and architecture to both know what the type attribute is and what the specific values are for individual subtypes.)
> > This shows on one chart all of the polymorphic events, what subtypes can receive them, the mappings between supertype and subtype event labels, and the values of the attribute that specifies the type. > > I do agree with Peter Fontana however, that if you put the supertype events on the subtype state diagrams, this would be unnecessary. (I would then like to be able to specify the value of the type attribute, for each subtype, on the information model.) > > Advice: Unless you want to write your own case tool, buy one and use whatever mechanism it provides. > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > Dana Simonson > Section Manager, Engineering > E. F. Johnson Company > Waseca Operations Center > 299 Johnson Ave SW > Waseca, MN 56093-0514 > dsimonson@efjohnson.com > www.efjohnson.com > > >>> "Stephen Guercio" 12/21 2:44 PM >>> > "Stephen Guercio" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Hi everyone, > > I'm looking into using polymorphic events in a model I'm working on. > OOA96 describes them pretty well, but I'm in a quandary as to how to > document them. The paper describes a correlation table to be created by > the analysts that relates the polymorphic events with their "real" > counterparts in the associated sub-types. > > I'd prefer to document this correlation in the models themselves > somehow, but can't see a way to do it. I guess I'm just lazy and don't > want to make a separate table outside of my models! > > Any experiences or suggestions? > > -- > Steve Guercio > Motorola > Wireless Subscriber Division > Semiconductor Products Sector > 20250 Century Blvd. > Germantown, MD 21770 -- Steve Guercio Motorola Wireless Subscriber Division Semiconductor Products Sector 20250 Century Blvd. Germantown, MD 21770

Subject: Re: (SMU) Polymorphic Events
"Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- From the monthly PT posting..... "Industry consultants and tool vendors are welcome and encouraged to answer questions. Product mentions by tool vendors and blatant advertisements by consultants are prohibited." Technically, if you are not a tool vendor you are not prohibited from mentioning tools, but I think it would be in poor taste to use the PT mailing list to discuss the advantages of competing products. We rolled our own since we didn't like what was available. Personally, I would like to see a site somewhere that contained links to the makers of all available SM tools. This would be beneficial to the method, but since it would not benefit any particular vendor, most would be unwilling to host such a page. If anyone knows of one, I'd like to get the address. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Dana Simonson Section Manager, Engineering E. F. Johnson Company Waseca Operations Center 299 Johnson Ave SW Waseca, MN 56093-0514 dsimonson@efjohnson.com www.efjohnson.com >>> "Stephen Guercio" 12/22 10:13 AM >>> "Stephen Guercio" writes to shlaer-mellor-users: -------------------------------------------------------------------- First, thanks everyone for your ideas. With all this mention of CASE tools, I was just wondering: Is it considered in poor taste in this forum to mention what tools we're using by name? The tool I'm using is UML-based, without direct Shlaer-Mellor support, so I'll probably go with Pete Fontana's idea.
Fontana" writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Stephen Guercio" writes to > shlaer-mellor-users: > -------------------------------------------------------------------- > With all this mention of CASE tools, I was just wondering: Is it > considered in poor taste in this forum to mention what tools > we're using by name? I think you can talk about case tools as long as you're not a vendor. Subject: Re: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Dumbo... I knew there were reasons why I avoided code snippets in Emails. > Also, you don't need to reallocate the memory for migration when using > embedding. You could do what you want with something along the lines of > > subtypeX* subtypeY::migrate_to_X (void) This would have made the point about C coding a bit better if it had actually been written in C. typedef struct { int type; /* other variables common to all subtypes */ } SUPER_TYPE; typedef struct { SUPER_TYPE s; /* variables explicit to X */ } SUB_X; typdef struct { SUPER_TYPE s; /* variables explicit to Y */ } SUB_Y; SUB_X* migrate_Y_to_X (SUB_Y* y_ptr) { ASSERT (y_ptr -> s.type == TYPE_Y); /* clean up relationships */ y_ptr -> s.type = TYPE_X; return (SUB_X*) y_ptr; } -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano MS NR22-11 600 Riverpark Dr. N. Reading, MA 01864 (Tel) (978)-370-1842 (Fax) (978)-370-1100 lahman@atb.teradyne.com Subject: RE: (SMU) Subtype vs. Supertype instances - in 'C' or 'C++'? David.Whipp@infineon.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > typedef struct { > int type; > /* other variables common to all subtypes */ > } SUPER_TYPE; > > typedef struct { > SUPER_TYPE s; > /* variables explicit to X */ > } SUB_X; > > typdef struct { > SUPER_TYPE s; > /* variables explicit to Y */ > } SUB_Y; > > SUB_X* migrate_Y_to_X (SUB_Y* y_ptr) > { > ASSERT (y_ptr -> s.type == TYPE_Y); > /* clean up relationships */ > y_ptr -> s.type = TYPE_X; > return (SUB_X*) y_ptr; > } I regret to inform you that your entry to the obfuscated C contest did not reach the final round. Your use of meaningful identifiers did not impress the judges. However, they did like a number of features in your code. You are obviously aware that the target of obfusction is the maintenance programmer, and thus your diabolical booby-traps are right on target. ;-) Most blatent is the memory violations that would occur (unless the subtype structures are all the same size) unless you initially allocate memory sized for the biggest. A consequence of this is that your objects can't be auto-variables on the stack Your use of C's guaranteed ordering of struct elements is delightful. In ignorance, a maintainer could decide to create a linked list of SUB_X, and so adds a *next pointer to the start of the struct. You immediately have a nasty bug that could lie hidden for some time. Finally, the judges were impressed by the smoke-and- mirrors employed to fool the maintainer into beleiving the code is type-safe with the migration funciton. A bug could be introduced whereby an old pointer is not migrated (the function returns a new pointer: it doesn't change all existing pointers). Programmers won't insert run-time type checks for this because you are using static type safety. 
So an incorrectly typed pointer could be used without ever being spotted. Again, this could be a nasty bug that lies hidden. None of these problems would occur with automatic code generation. But your arguments against unions were based on the lack of such automation, so I feel free to point out the problems in your solution under the same assumption. The "correct" way to do this in C is the union:

struct sub_x { ...; };
struct sub_y { ...; };

struct super {
    id_t   id;
    type_t an_attribute;
    role_t tag;
    union {
        struct sub_x x;
        struct sub_y y;
    } subtype;
};

Migration is a simple matter of adjusting the tag. You are guaranteed that the supertype structure is always big enough to hold its subtypes. Additional members can be added to the super- and sub- types without worrying about subtle data-displacement bugs. And, because you don't make any static type guarantees, programmers shouldn't rely on them. Dave.
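[A sketch only, tying the two positions together: under the tagged-union layout, migration really is a matter of adjusting the tag plus re-initializing the subtype data, with a run-time check standing in for the compile-time safety of the embedded-struct layout. The tag values and member names below are invented stand-ins for the members elided in the fragment above.]

#include <assert.h>
#include <string.h>

typedef enum { ROLE_X, ROLE_Y } role_t;   /* invented tag values */

struct sub_x { int x_attr; };             /* stand-ins for the elided members */
struct sub_y { int y_attr; };

struct super {
    int    id;
    role_t tag;
    union {
        struct sub_x x;
        struct sub_y y;
    } subtype;
};

void migrate_y_to_x(struct super *instance)
{
    assert(instance->tag == ROLE_Y);  /* run-time, not compile-time, check */
    /* clean up relationships that only subtype Y participates in */
    memset(&instance->subtype, 0, sizeof instance->subtype);
    instance->tag = ROLE_X;
    instance->subtype.x.x_attr = 0;   /* initialize X's data as required */
}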