archive.9801

Subject: Re: (SMU) SMALL: link/unlink

David.Whipp@gpsemi.com (Dave Whipp) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greg Wiley wrote:

> The method does not preclude us from defining
> new accessor forms in addition to those listed
> in OOA96. What if we think of link and unlink
> as accessor forms?

If link/unlink were non-standard (or even standard!) extensions then
their restrictions wouldn't matter. The fact that they have problems
when applied to relationships which share their formalising
attributes with another wouldn't matter, because those relationships
could still be managed with the more general method of writing to
referential attributes.

SMALL makes two restrictions that prevent this scheme - both
unnecessary, IMHO. These are: you can't read/write referential
attributes; and you can't pass arbitrary-valued quantities on
dataflows.

OTOH, SMALL does provide a feature that enables the use of
link/unlink: the use of references in place of dataflows. If SMALL
did not include references then it would be tricky to add them as an
extension.

If you wish to maintain architectural simplicity then architects and
analysts can agree restrictions. For example, banning non-arbitrary
primary identifiers.

Dave. Not speaking for GPS.


Subject: (SMU) SMALL: Event Generation

David.Whipp@gpsemi.com (Dave Whipp) writes to shlaer-mellor-users:
--------------------------------------------------------------------

You send an event in SMALL by piping in a reference and then
specifying supplemental data as parameters: either data-variables or
literals. I agree with the people who have said that this partition
is questionable. Why not pipe it all in and then extract the
appropriate bits:

  X(one).(id, value) | gen Y1: Abc (id; value=>data);

The alternative is:

  X(one) > my_X;
  my_X.value > ~data;
  my_X | gen Y1: Abc (~data);

It comes down to aesthetics, but I prefer the former version. Who was
it that said "good software is 90% aesthetics"?

However, if we accept the definition in SMALL as it currently stands,
then the following comments come to mind:

Data-variables must be write-once within an action. This is not
specified anywhere in the paper. They should also carry with them the
semantics of sequential constraint. If not, then guards may be needed
to ensure that the correct supplemental data is passed on the event.

There are four kinds of event in Shlaer-Mellor: 'normal', creation,
assigner and multiple-assigner. Is each of these supported by SMALL?

Creation events and assigner events are not associated with any
instance. Therefore it would not make sense to pipe in a reference.
Neither can supplemental data be piped into the event. Therefore
these kinds of event generator must always appear by themselves in a
statement (possibly guarded). I tend to find this a bit ugly, but
usable.

Finally: multiple-assigner events. Are these equivalent to 'normal'
events? At a first glance, it appears that they are.
Multiple-assigners are associated with an instance that is part of a
loop of dependent relationships. A reference to that instance should
be piped into the event generator.

But there is a bit of scope for complication. What if the thing used
to partition the relationship is not an object? (This possibility
might be ruled out by the method - I'm not sure.) For the sake of a
contrived example, consider the Customer-Clerk example in a rather
politically incorrect shop. Suppose we partition the relationship on
the basis of the sex of the customer. There will be a set of clerks
who serve male customers and a non-overlapping set who serve female
customers. The equivalence class is based on a two-valued enumeration
that is not the identifier of any object. It therefore cannot be a
reference and therefore cannot be piped into an event generator. But
does the following make sense?

  c_id | Customer().gender | R3-A1: CustomerWaiting (gender;);
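To show what I'm imagining, here is a throwaway sketch in Python of
how a simulator might route such an event. All the names are
invented; this is not SMALL, nor any tool's actual interface:

from collections import defaultdict, deque

# One assigner (here just an event queue) per equivalence class. The
# partition key is 'M' or 'F' - a plain enumeration value, not a
# reference to any instance.
assigners = defaultdict(deque)

customers = {"c1": {"id": "c1", "gender": "F"},
             "c2": {"id": "c2", "gender": "M"}}

def gen_customer_waiting(c_id):
    """Rough equivalent of:
       c_id | Customer().gender | R3-A1: CustomerWaiting (gender;);"""
    gender = customers[c_id]["gender"]     # read the partition value
    assigners[gender].append(("CustomerWaiting", c_id))

gen_customer_waiting("c1")
gen_customer_waiting("c2")
print(dict(assigners))
# {'F': deque([('CustomerWaiting', 'c1')]),
#  'M': deque([('CustomerWaiting', 'c2')])}

The routing itself is easy enough; the open question is whether the
method lets anything other than a reference be piped in.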
Dave. Not speaking for GPS.
--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277    mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306    http://www.gpsemi.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi,

Whipp describes a model that highlights a problem with the link
operator:

> Consider a model with 3 objects (A, B and C), with relationships
> r1 and r2 between A/B and B/C respectively. For this demonstration,
> I will define a referential attribute in B to formalise both
> relationships (for simplicity, there are no compound identifiers).

The model demonstrates that R1 and R2 are always "linked" together.
This is clear in an ADFD, since changing the referential attribute in
B is an atomic operation. However, link is specified in terms of a
relationship, but must affect the referential attribute.

When "(refB, refA2) | link R1" is executed, it also seems to execute
"(refB, refC2) | link R2". Furthermore, I may write
"(refB, refA2) | link R1" then "(refB, refC1) | link R2", not
necessarily considering that the latter statement has undone the
former. The point is that using the link operator gives the
impression of affecting one relationship only, not two as in this
case.

To overcome this side effect, choose one from:

1) Discard the link and unlink operators.

2) Decide Whipp's model has no meaning in the Real World and thus is
   an irrelevant construct. I can't find any example like this in the
   texts. Nor can I think of one.

3) Change OOA to disallow one referential attribute from formalising
   two or more relationships by banning non-arbitrary identifiers.
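The side effect is easy to demonstrate if you simulate the instance
tables. A throwaway Python sketch (the representation is mine, not
anything prescribed by OOA):

# Whipp's model: B.ref formalises both R1 (A/B) and R2 (B/C).
a = {"1": "A1", "2": "A2"}      # instances of A, keyed by identifier
c = {"1": "C1", "2": "C2"}      # instances of C, keyed by identifier
b = {"ref": "1"}                # the single instance of B

def related_via_r1():  return a[b["ref"]]
def related_via_r2():  return c[b["ref"]]

def link_r1(a_id):
    # link is specified per-relationship, but the only thing it can
    # touch is the shared formalising attribute...
    b["ref"] = a_id

print(related_via_r1(), related_via_r2())  # A1 C1
link_r1("2")                               # "(refB, refA2) | link R1"
print(related_via_r1(), related_via_r2())  # A2 C2 - R2 moved as well!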
BTW, Happy New Year!

--
Mike Finn
Dark Matter | Email: smf@cix.compulink.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:

> 2) Decide Whipp's model has no meaning in the Real World and
>    thus is an irrelevant construct. I can't find any example
>    like this in the texts. Nor can I think of one.

Try Fig 3.8 on page 15 of the OOA96 Report. You will also find many
such formalisations in complex subtype structures, though these don't
have the same problems with link/unlink. (But there may be some
analogous problems with the migration operator in these complex
structures - what happens when you migrate into an object whose
identifier forms a relationship with another object?)

I think I'm going to have to think about this - I'm getting confused
trying to work out what does, and doesn't, happen implicitly with the
relationship manipulations.

Dave.
--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277    mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306    http://www.gpsemi.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> You have agreed that relationships are completely defined by the
> values of referential attributes. If you know the value of the
> referential attribute(s) then you know the relationship.
> Furthermore, I think you also agreed that, from the perspective
> of the OOA (i.e. in a simulation) the only information that is
> available to define the relationship is its formalising
> attributes.
>
> Consider a model with 3 objects (A, B and C), with relationships
> r1 and r2 between A/B and B/C respectively. For this demonstration,
> I will define a referential attribute in B to formalise both
> relationships (for simplicity, there are no compound identifiers).

For the benefit of lurkers who might get the wrong idea, this is not
legal. If there are no compound identifiers, then this says that
instances of A and C have the same identifier, which is a no-no. At
least one of them would have to have a compound identifier. But this
is a quibble that is not relevant to the argument.

> Suppose there are two instances of A; two instances of C; and one
> instance of B. Let the identifiers of the instances of both A and
> C be "1" and "2". The value of the referential attribute in B
> will therefore be either "1" or "2".
>
> (You may want to draw an instance table for each object.)
>
> Hopefully you agree that, if I was able to change the value of
> the referential attribute in B from "1" to "2" then both
> relationships would be affected. However: in SMALL, I cannot do
> that. I must use two link processes to link the two relationships.

To be consistent, I think both relationships need to be deleted
(unlinked) before redefining them. I submit this is true even if one
is just writing to relational identifiers. One should write NOT
PARTICIPATING to the relational identifiers prior to changing the
value. This is necessary so that the architecture can Do the Right
Thing if it happens to be maintaining other data structures, like
sorted lists, for the relationship.

I argue that being forced to use unlink makes this (the deactivation
of the existing relationship) much clearer in the models. (In tools
that use link/unlink, an error will usually be generated in
simulation if you attempt to double up on a relationship by omitting
the unlink.) It also makes doing an architecture much simpler because
the deactivation and activation of relationships are separated.

> Now, suppose I run a simulation where I single step the DFD.
> One of the processes will be executed before the other (if
> there are no sequential constraints then there are no
> restrictions on the order).
>
> Initially, the value of the referential attribute is "1": my
> B is linked to instance "1" of A and instance "1" of C. I will
> use accessor processes to get references to instances "2" of A
> and C.
>
> I now execute the "(refB, refA2) | link R1" statement. In your
> post, you agreed that relationships are defined by the value
> of referential attributes; and that the link statement will,
> from the perspective of OOA (not the implementation), set the
> value of the referential attribute. So the value of the
> referential attribute is now "2".
>
> Given that relationships are defined by their formalising
> attributes; then, if the value of the referential attribute
> is "2", which instance of C is currently related to B via R2?
> Does the subsequent "(refB, refC2) | link R2" do anything?
> Would I get different simulation results if I omitted it?
>
> You either break the referential attributes (if the relationship
> is not defined by the value); or you are requiring the "link R1"
> process to link R2 as a side effect (if the relationship is
> defined by the value).

I see two issues here. First, the old relationships need to be
removed prior to establishing new ones. If the IM says that both
relationships share an identifier, then if one of the relationships
is changed, they both must be changed. That is, you can't simply
reassign one of them -- if one is removed, they both have to be
removed. Using a shorthand to directly write a new value to the
referential attribute without an intervening NOT PARTICIPATING
overloads the write process by forcing it to both deactivate and
activate relationships. I don't like this on principle. If you use
link/unlink, then you must unlink both prior to doing any new links,
making things explicit for the translation.

The second issue is when consistency must pertain. At the end of the
action the world should be consistent (or getting there in some
deterministic fashion). You seem to be arguing that relationship
consistency must be maintained between processes in an action. This
is not generally possible. If I unlink both and then link both,
everything will be consistent at the end of the action, which is all
anyone can ask. Thus I see no substantive difference between writing
NOT PARTICIPATING followed by writing a new identifier and doing two
unlinks followed by two links. Either way the technique results in
relational integrity at the end of the action.

The link/unlink is more verbose, but it has the value of being quite
consistent in its application. That is, it works exactly the same way
whether the identifiers are shared or not.

I think this clarity and consistency comes into play when one uses
compound identifiers. Typically some elements of the compound
identifier will be shared and others won't be. This embroils the
analyst in lower level issues about which identifiers need to be
written and which don't. By using link/unlink one raises the issue to
the level of relationships (i.e., which ones are active). The analyst
can look at the IM to verify that all the ducks are in a row (i.e.,
to determine if another unlink/link pair is required). If the analyst
screws up, then the simulator can easily diagnose the problem.

After being initially wishy-washy about link/unlink, I find that I am
moving steadily further into the link/unlink camp.
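To make the equivalence concrete, here is a rough Python sketch of
the link/unlink bookkeeping a simulator might keep for Whipp's model.
The representation is invented for illustration, not taken from any
tool:

NOT_PARTICIPATING = None

# Per-relationship bookkeeping; in the IM both entries are
# formalised by the one shared attribute in B.
links = {"R1": "1", "R2": "1"}

def unlink(rel):
    links[rel] = NOT_PARTICIPATING

def link(rel, ident):
    # a simulator can flag doubling up on a relationship
    if links[rel] is not NOT_PARTICIPATING:
        raise RuntimeError(rel + " already linked; unlink it first")
    links[rel] = ident

# Two unlinks followed by two links: wordier, but consistent at the
# end of the action -- just like writing NOT PARTICIPATING and then
# writing "2" to the shared referential attribute.
unlink("R1"); unlink("R2")
link("R1", "2"); link("R2", "2")
print(links)                   # {'R1': '2', 'R2': '2'}

# Omitting the unlinks is caught immediately:
try:
    link("R1", "1")
except RuntimeError as error:
    print(error)               # R1 already linked; unlink it first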
--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

> The model demonstrates that R1 and R2 are always "linked" together.
> This is clear in an ADFD since changing the referential attribute
> in B is an atomic operation. However, link is specified in terms
> of a relationship, but must affect the referential attribute.
>
> When "(refB, refA2) | link R1" is executed, it also seems to
> execute "(refB, refC2) | link R2".
>
> Furthermore, I may write "(refB, refA2) | link R1" then
> "(refB, refC1) | link R2", not necessarily considering that the
> latter statement has undone the former.

See my response to Whipp for more detail. The crucial issue here is
that the unlinks are missing. If the existing relationships are
removed prior to linking the new ones, then the links have the
independence you seek.

P.S. I haven't figured out what your signoff pseudo graphic is; it
loses something in translation when one has a proportional font by
default.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Event Generation

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> Data-variables must be write-once within an action. This is not
> specified anywhere in the paper. They should also carry with them
> the semantics of sequential constraint. If not, then guards may be
> needed to ensure that the correct supplemental data is passed on
> the event.

I am not sure I follow this one. I can agree within the thread
sequence represented by the piped statement. But within the whole
action?? For separate statements, I don't see anything different
about the problem than one would have anytime one has multiple reads
and writes in an action -- it is up to the analyst to specify the
process sequence so that reads get the proper values.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: link/unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Wiley...

> The method does not preclude us from defining
> new accessor forms in addition to those listed
> in OOA96. What if we think of link and unlink
> as accessor forms? Does that change the
> discussion? Am I just stating the obvious?

I just assumed that they would be specialized processes. That is, if
one wanted to do so, I would think they could be back-filled easily
into the ADFD by adding them to the OOA96 list of processes --
perhaps with another specialized process that obtained an instance
reference given identifiers. There would have to be an attendant
definition of their responsibilities and the rules for using them.
Given that, they would simply be a well-defined architectural process
like an event generator. I think they are justified in being special
because their implementation is highly restricted.
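For instance, a tool's OOA-of-OOA might carry something like the
following Python sketch -- my own invention, not anything PT has
published -- in which link/unlink simply join the existing process
types, each with an attendant well-formedness rule:

from enum import Enum, auto

class ProcessType(Enum):
    ACCESSOR = auto()        # reads/writes a data store
    EVENT_GENERATOR = auto()
    TRANSFORMATION = auto()
    TEST = auto()
    LINK = auto()            # specialized: activates a relationship
    UNLINK = auto()          # specialized: deactivates a relationship
    REFERENCE = auto()       # specialized: identifiers -> instance ref

def check_process(ptype, operands, relationship=None):
    # one attendant rule per specialized process
    if ptype in (ProcessType.LINK, ProcessType.UNLINK):
        if relationship is None or len(operands) != 2:
            raise ValueError("link/unlink takes two instance "
                             "references and names a relationship")

check_process(ProcessType.LINK, ("refB", "refA2"), relationship="R1")
try:
    check_process(ProcessType.UNLINK, ("refB",))
except ValueError as error:
    print(error)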
--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: link/unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana...

> A second issue with link&unlink/vs/Write is analysis consistency:
> in the absence of the link brothers, Write is a very
> straightforward operation, with no opportunity to shoot yourself
> in the foot (regarding formalization). However, when link and
> unlink are separated out, the use of Write requires care - the
> analyst must manually ensure the appropriate links and unlinks
> are called.

I see it as just the opposite. In Whipp's example the write is
overloaded with functionality because it has to do whatever is
necessary to remove the old relationship as well as what needs to be
done to instantiate a new one. This seems to be an invitation to foot
shooting -- much like using NOT PARTICIPATING for conditional
relationships that share an identifier. By decoupling the activation
and deactivation through link/unlink it seems to me that we have a
more straightforward means of serving the problem space.

I also think that link/unlink are more straightforward because they
remove the duality of Write accessors. If you have link/unlink, then
a Write accessor is merely that -- it just writes values to a data
store. Without link/unlink the Write accessor is something very
different when the target happens to be a relational identifier.

Finally, I think link/unlink are more straightforward because they
remove the need for the analyst to worry about which particular
identifiers need to be written. The level of abstraction for the
analyst is raised to the relationship level where it should be,
rather than the individual components of compound identifiers.

BTW, the spec prohibits writing to relational identifiers, presumably
to make sure that the right hand always knows what the left hand is
doing, so the last issue should not be a problem.
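The duality is easy to see in a sketch. In hypothetical Python
pseudo-architecture of my own devising, the overloaded Write has to
branch on what the attribute means, while the link/unlink version
leaves Write trivial:

# Without link/unlink: one accessor, two very different behaviours.
REFERENTIALS = {"B": {"ref": ("R1", "R2")}}  # what formalises what

def write_overloaded(obj, instance, attr, value):
    if attr in REFERENTIALS.get(obj, {}):
        for rel in REFERENTIALS[obj][attr]:
            deactivate(rel, instance)        # remove old relationship
        instance[attr] = value
        for rel in REFERENTIALS[obj][attr]:
            activate(rel, instance, value)   # instantiate new one
    else:
        instance[attr] = value               # a plain data write

# With link/unlink: Write is merely a write; relationship management
# lives in its own processes.
def write_plain(instance, attr, value):
    instance[attr] = value

def deactivate(rel, instance): pass          # stand-ins for whatever
def activate(rel, instance, value): pass     # the architecture does

b = {"ref": "1", "colour": "blue"}
write_overloaded("B", b, "ref", "2")  # hidden R1/R2 deactivate/activate
write_plain(b, "colour", "red")       # no hidden side effects
print(b)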
--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

When one starts talking to oneself, it is time for a vacation...

> For the benefit of lurkers who might get the wrong idea, this is
> not legal. If there are no compound identifiers, then this says
> that instances of A and C have the same identifier, which is a
> no-no. At least one of them would have to have a compound
> identifier. But this is a quibble that is not relevant to the
> argument.

Rather imprecise. The situation would be legal if A and C were
subtypes of the same supertype.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Event Generation

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> Responding to Whipp...
>
> > Data-variables must be write-once within an action. This is not
> > specified anywhere in the paper. They should also carry with them
> > the semantics of sequential constraint. If not, then guards may
> > be needed to ensure that the correct supplemental data is passed
> > on the event.
>
> I am not sure I follow this one. I can agree within the thread
> sequence represented by the piped statement. But within the
> whole action?? For separate statements, I don't see anything
> different about the problem than one would have anytime one
> has multiple reads and writes in an action -- it is up to the
> analyst to specify the process sequence so that reads get the
> proper values.

Consider the generation of a creation event. No values are piped into
it because there is no target instance. It must therefore appear by
itself in a statement.

If data variables used to supply its supplemental data are write-many
then it would be necessary to use guards to specify which value to
use. You might get a situation like:

  A(one).x > ~x !a;
  !a: ~x | ... > ~x !b;
  !b: gen B1(~x) !c;
  !c: ... > ~x;

This is both very confusing and of no practical use. If we used a
different variable in each statement then no guards would be
required, provided the data variables carry sequential constraint.
Sequential constraint is incompatible with write-many; so life is
much easier if we restrict all data variables to be write-once.
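The point about sequential constraint can be made concrete: with
write-once variables a simulator can derive a valid ordering from the
def/use chains alone. A rough Python sketch, nothing to do with any
real tool:

# Each statement lists the data-variables it reads and the one it
# writes. With write-once variables the dependencies form a DAG, so
# a valid execution order falls out without any guards.
statements = {
    "s1": {"writes": "x1", "reads": []},      # A(one).x > ~x1
    "s2": {"writes": "x2", "reads": ["x1"]},  # ~x1 | ... > ~x2
    "s3": {"writes": None, "reads": ["x2"]},  # gen B1(~x2)
}

def schedule(stmts):
    written, order = set(), []
    while len(order) < len(stmts):
        for name, s in stmts.items():
            if name not in order and all(r in written for r in s["reads"]):
                order.append(name)
                if s["writes"]:
                    written.add(s["writes"])
                break
        else:
            raise RuntimeError("cycle: sequential constraint violated")
    return order

print(schedule(statements))   # ['s1', 's2', 's3']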
Dave.
--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277    mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306    http://www.gpsemi.com


Subject: Re: (SMU) SMALL: Event Generation

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> Consider the generation of a creation event. No values are
> piped into it because there is no target instance. It must
> therefore appear by itself in a statement.
>
> If data variables used to supply its supplemental data are
> write-many then it would be necessary to use guards to specify
> which value to use. You might get a situation like:
>
>   A(one).x > ~x !a;
>   !a: ~x | ... > ~x !b;
>   !b: gen B1(~x) !c;
>   !c: ... > ~x;
>
> This is both very confusing and of no practical use. If we
> used a different variable in each statement then no guards
> would be required, provided the data variables carry
> sequential constraint. Sequential constraint is incompatible
> with write-many; so life is much easier if we restrict all
> data variables to be write-once.

OK, I thought you were saying it had to be write-once, even if guards
were used.

FWIW, I agree about the guards potentially being quite confusing and
distracting in most cases. I also think they may be irrelevant once
compilers for multiprocessors evolve to some reasonable optimization
capability. That's why I lobbied to make the sequencing definition a
separate part of the action specification.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> I think this clarity and consistency comes into play when one
> uses compound identifiers. Typically some elements of the
> compound identifier will be shared and others won't be. This
> embroils the analyst in lower level issues about which
> identifiers need to be written and which don't. By using
> link/unlink one raises the issue to the level of relationships
> (i.e., which ones are active). The analyst can look at the IM
> to verify that all the ducks are in a row (i.e., to determine
> if another unlink/link pair is required). If the analyst
> screws up, then the simulator can easily diagnose the problem.

You seem to have a world-view that says that relationships are
tangible, and that the data model is somehow a low-level
implementation detail. I tend to view the model from the opposite
perspective - the data is the high level representation. Using data,
the analyst doesn't need to worry about the murky issues of the low
level details of linking/unlinking a relationship.

I, too, can play at low-level implementation details to justify my
point of view. My view of relationship formalisation is to treat the
referential attribute as the select signal to a multiplexor. It is
wrong to play these implementation games though, because the OOA is
not an implementation.

Using Occam's Razor, I feel that link/unlink, and the associated
restrictions on attribute accesses, are additional concepts that are
unnecessary. The method is simpler without them.

(Being unnecessary does not mean "not useful". However, the OOA is
not the analysis. It is the formalisation of the analysis. As such,
being consistent and unambiguous [and also passing the ham-sandwich
test] are more important than having every possible useful feature.)

> To be consistent, I think both relationships need to be
> deleted (unlinked) before redefining them. I submit this
> is true even if one is just writing to relational
> identifiers. One should write NOT PARTICIPATING to the
> relational identifiers prior to changing the value.

Firstly, I must point out that some referential attributes don't have
this value in their attribute-domain.

Secondly, not-participating is, itself, a value. My original post,
where I showed the problem of changing the value using link/unlink,
could be re-written to have "not-participating" as the final value.
My point still stands: if the relationship is defined by the data
then the second unlink has no observable effect. The first unlink
will have written "not participating" to the referential attribute.

Thirdly, if the relationship is defined by the data, then what
possible purpose is served by writing the intermediate value? Only
when you start doing implementation is such a value useful; and then
it can be inserted by the translator.
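For what it's worth, the multiplexor view as a throwaway Python
sketch (my own names): one select value feeding two multiplexors, so
both outputs change together by construction:

# The referential attribute in B is the select signal; the related
# instance of A and of C are just the selected inputs.
a_inputs = {"1": "A1", "2": "A2"}
c_inputs = {"1": "C1", "2": "C2"}

def mux(inputs, select):
    return inputs[select]

b_ref = "1"                                         # the select signal
print(mux(a_inputs, b_ref), mux(c_inputs, b_ref))   # A1 C1

b_ref = "2"                                         # one write...
print(mux(a_inputs, b_ref), mux(c_inputs, b_ref))   # A2 C2 - both moved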
Dave. Not speaking for GPS.
--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277    mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306    http://www.gpsemi.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:51 AM 1/6/98 +0000, shlaer-mellor-users@projtech.com wrote:

>Dave Whipp writes to shlaer-mellor-users:
>--------------------------------------------------------------------
> ...
>You seem to have a world-view that says that relationships
>are tangible, and that the data model is somehow a low-level
>implementation detail. I tend to view the model from the
>opposite perspective - the data is the high level
>representation.
> ...
>Using Occam's Razor, I feel that link/unlink, and the
>associated restrictions on attribute accesses, are
>additional concepts that are unnecessary.
> ...
>Only when you start doing implementation is such a
>value useful; and then it can be
>inserted by the translator.

Dave - excellent job on summarizing your viewpoint. I agree
completely with your position. I believe the core issue here is the
redundancy of the link/unlink.

 ____________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com |
| 888-OOA-PATH                                       |
| effective solutions for OOA/RD challenges          |
|                                                    |
| Peter Fontana            voice: +01 508-384-1392   |
| peterf@pathfindersol.com   fax: +01 508-384-7906   |
|____________________________________________________|


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> You seem to have a world-view that says that relationships
> are tangible, and that the data model is somehow a low-level
> implementation detail. I tend to view the model from the
> opposite perspective - the data is the high level
> representation. Using data, the analyst doesn't need to worry
> about the murky issues of the low level details of
> linking/unlinking a relationship.

I must have really been inarticulate if you got the impression that I
think that the IM is not a high level of representation!

However, I _do_ regard a relationship as being an equivalent level of
abstraction with an object in describing the data model. In addition
I view attributes as being refinements of these two abstractions. In
particular, I view relational attributes as a supporting refinement
of the relationship abstraction. In this sense the relational
attributes are at a lower level of abstraction than the relationship.
Therefore, when I want to deal with interactions between objects, I
would prefer to do so at the relationship level rather than the
relational identifier level whenever possible.

Speaking of arguing implementation, it seems to me that you regard
relational identifiers as data elements. This assumes a particular
class of implementation where the data stores are relational tables
and the relationships are instantiated as foreign keys in those
tables. In practice this is rarely the case, so I prefer to view them
as abstractions that happen to use the relational table metaphor to
allow the analyst to ensure relational integrity in the models.

> Using Occam's Razor, I feel that link/unlink, and the
> associated restrictions on attribute accesses, are
> additional concepts that are unnecessary. The method
> is simpler without them.
>
> (Being unnecessary does not mean "not useful". However,
> the OOA is not the analysis. It is the formalisation of
> the analysis. As such, being consistent and unambiguous
> [and also passing the ham-sandwich test] are more
> important than having every possible useful feature.)

I think "additional and unnecessary" is not a good characterization.
They are an alternative to manipulating relational identifiers
directly. The PT paper made this quite clear by eliminating the
writes to relational identifiers. If one eliminates writes to
relational identifiers, then there has to be another means of
defining relationships, and link/unlink is it.

As I pointed out earlier, I can take any ADFD with relational
identifier accessors and replace them with link, unlink, and
reference processes without changing the functionality. In doing so,
I can represent the navigations solely in terms of the high level
relationship and object abstractions.

> > To be consistent, I think both relationships need to be
> > deleted (unlinked) before redefining them. I submit this
> > is true even if one is just writing to relational
> > identifiers. One should write NOT PARTICIPATING to the
> > relational identifiers prior to changing the value.
>
> Firstly, I must point out that some referential attributes
> don't have this value in their attribute-domain.
My counter is that this would only be true for unconditional
relationships that were never changed, in which case the issue is
moot. I think this is the same issue as changing the identifier for
an existing instance -- one should delete/create rather than simply
writing a new identifier.

> Secondly, not-participating is, itself, a value. My original
> post, where I showed the problem of changing the value using
> link/unlink, could be re-written to have "not-participating"
> as the final value. My point still stands: if the relationship
> is defined by the data then the second unlink has no
> observable effect. The first unlink will have written
> "not participating" to the referential attribute.

For the record, I have always disliked the idea of NOT PARTICIPATING
since it first came up, precisely because it prescribes an
implementation. It seems to me that you are assuming a particular
underlying implementation where there really is a single foreign key
in a table. Whether the second unlink does something significant,
does something redundant, or whether it does anything at all in the
implementation depends upon what the underlying implementation is.
The referential identifiers are abstractions, not necessarily data
elements.

I think that one of the advantages of link/unlink is that it makes
this more clear. At the action language level each unlink is invoking
a high level operation on a different model abstraction (i.e., a
different relationship). This operation is not dependent upon the
details of the relational identifiers (e.g., whether they are
shared). This generality may result in nothing happening in the
architecture for specific cases, but that is an issue for
translation.

As far as observability is concerned, I don't see a difference. When
you write NOT PARTICIPATING to the relational attribute the
observation is, "Both A/B and C/B relationships have been removed".
My first unlink says, "relationship A/B has been removed" and my
second unlink says, "relationship C/B has been removed". More
verbose, but equivalent observability.

> Thirdly, if the relationship is defined by the data, then
> what possible purpose is served by writing the intermediate
> value? Only when you start doing implementation is such a
> value useful; and then it can be inserted by the translator.

The value lies in robustness and generality. I agree that this is
being done for the benefit of the implementation -- more
specifically, the translator. However, there is a value in making
oneself clear to the translator and in making the translator's life
easier. The old relationships _must_ be removed before the new ones
can be put in their place. In some implementations this may be a
NOOP, but in most cases the architecture will have to do something
specific. If you overload the write (or link) to also remove the old
relationships as well as activate the new ones, this is asking for
trouble. Functional isolation is just as valuable as data
encapsulation.

Now let me provide my counter example. Suppose I have an object with
two relational identifiers:

  ref_1 (R1, R2) currently has value 5
  ref_2 (R1) currently has value 1

I want to link R1 to a new instance where ref_1 = 4 and ref_2 = 1. I
write a 4 to ref_1. The write accessor will dutifully remove the R1
and R2 relationships and activate new ones, with a different R2.

There are two possibilities. I really wanted to have a new R2, so all
is well. OTOH, I might really want to keep the old R2 around while I
have the new R1.
This is a mistake in the IM, which should be fixed. However, if I
didn't look too carefully at the IM when I made the change, this
error gets immortalized in the code.

Now let's look at using link/unlink. For the first case where I also
want to change R2, I unlink both and then link the new ones. Wordy,
but Just Works. For the second case where I only unlink/link R1
(forgetting about R2), I get an error the first time I run the
simulator down that path where it tries to do the link. This is not
dependent upon how good my test cases are; it will blow up whenever
the link is attempted (with a different ref_1).

In both approaches the analyst has an obligation to think things out
properly. Alas, this doesn't always work as well as one would hope.
When it doesn't work I would prefer a wordier technique that found
the error early to a more elegant technique that might not find the
error until the software was in the field because my test cases
didn't include the right combination of subsequent instance
references.

The ability to detect the error lies in the fact that the
functionality is partitioned. When using a link, the simulator knows
only one relationship is being linked, so if there is a shared
identifier any related instances had better match up. It can detect
the fact that the state of the system is incorrect for the link when
the analyst did not Do the Right Thing by invoking both unlinks
before any links.
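Here, roughly, is what I mean, as a Python sketch of a simulator's
link process for my counter example. The checking logic is invented
for illustration, not taken from any tool:

# The object's two relational identifiers; ref_1 formalises both
# R1 and R2, ref_2 formalises R1 only.
instance = {"ref_1": 5, "ref_2": 1}
linked = {"R1": True, "R2": True}
FORMALISERS = {"R1": ("ref_1", "ref_2"), "R2": ("ref_1",)}

def unlink(rel):
    linked[rel] = False

def link(rel, **new_values):
    if linked[rel]:
        raise RuntimeError(rel + " is still linked; unlink it first")
    for attr in FORMALISERS[rel]:
        # A shared identifier must match any relationship still active.
        for other, attrs in FORMALISERS.items():
            if (other != rel and linked[other] and attr in attrs
                    and instance[attr] != new_values[attr]):
                raise RuntimeError("link " + rel + ": " + attr +
                                   " conflicts with active " + other)
    instance.update(new_values)
    linked[rel] = True

unlink("R1")                   # forgot to unlink R2...
try:
    link("R1", ref_1=4, ref_2=1)
except RuntimeError as error:
    print(error)               # blows up immediately, every time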
--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Link/Unlink

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> I must have really been inarticulate if you got the impression
> that I think that the IM is not a high level of representation!
>
> However, I _do_ regard a relationship as being an equivalent level
> of abstraction with an object in describing the data model. In
> addition I view attributes as being refinements of these two
> abstractions. [...] In this sense the relational attributes
> are at a lower level of abstraction than the relationship.

I think that this is one area where we slightly differ. I view the
OOA model as a level of abstraction. Objects, relationships,
attributes, states, processes, etc. all cooperate to form a single,
consistent model. There is no need to consider any elements as being
higher, or lower, abstractions. To put it another way: the OOA-of-OOA
is a single domain (with various subsystems).

> Speaking of arguing implementation, it seems to me that you regard
> relational identifiers as data elements. This assumes a particular
> class of implementation where the data stores are relational tables
> and the relationships are instantiated as foreign keys in those
> tables. In practice this is rarely the case, so I prefer to view
> them as abstractions that happen to use the relational table
> metaphor to allow the analyst to ensure relational integrity in
> the models.

I agree with this paragraph completely. I do view referential
attributes as data members; and I do regard them as an abstraction. I
do not (always) _implement_ them as data members, because linked
lists are often more efficient. I do not blindly use linked lists
though, because I often find that a table lookup _is_ more
appropriate. In many situations, table lookup is the most efficient
algorithm available. To summarise: my view of referential attributes
as data does not affect the implementation I choose.

> > Using Occam's Razor [...]
>
> I think "additional and unnecessary" is not a good
> characterization. They are an alternative to manipulating
> relational identifiers directly. The PT paper made this quite
> clear by eliminating the writes to relational identifiers. If one
> eliminates writes to relational identifiers, then there has to be
> another means of defining relationships, and link/unlink is it.

This is true, but the restrictions on writing referential attributes
are themselves unnecessary additions to the method. If you subtract
link, unlink and the restrictions from the paper then you do not lose
any functionality; but the method becomes simpler.

> [...]
> For the record, I have always disliked the idea of NOT
> PARTICIPATING since it first came up, precisely because it
> prescribes an implementation. It seems to me that you are
> assuming a particular underlying implementation where there
> really is a single foreign key in a table.

No, the idea of "not participating" does not prescribe an
implementation. It is completely compatible with a link based
implementation.

[two unlinks on relationships with shared formalisation]

> Whether the second unlink does something significant, does
> something redundant, or whether it does anything at all in the
> implementation depends upon what the underlying implementation
> is.

It shouldn't. It should be defined by the method. The only
information that the method associates with a relationship is the
referential attribute. As soon as the first unlink has written this
as NULL then the second unlink will have no effect. As I have said on
many occasions: unless PT add some hidden variables to the formalism
then relationships are completely defined by the referential
attributes.

The best you can hope for under the current formalism is an
architectural restriction that informs you that the second unlink (or
lack of it) in this scenario does not conform to the method. That may
be a perfectly reasonable point of view for your project.

> As far as observability is concerned, I don't see a difference.
> When you write NOT PARTICIPATING to the relational attribute
> the observation is, "Both A/B and C/B relationships have been
> removed". My first unlink says, "relationship A/B has been
> removed" and my second unlink says, "relationship C/B has
> been removed". More verbose, but equivalent observability.

Not quite. If you take the view that the formalising attribute is the
relationship, then the first unlink will have set this attribute to
"not participating". From the point of view of OOA, the second has no
observable effect. Thus, if you write a model where you omit the
second unlink, there will be no implication in simulation (but
although there is no effect, you would define it to be an error!).

> Now let me provide my counter example. Suppose I have an
> object with two relational identifiers:
>
>   ref_1 (R1, R2) currently has value 5
>   ref_2 (R1) currently has value 1
>
> I want to link R1 to a new instance where ref_1=4 and ref_2=1.
> I write a 4 to ref_1. The write accessor will dutifully remove
> the R1 and R2 relationships and activate new ones, with a
> different R2.
>
> There are two possibilities. I really wanted to have a new R2,
> so all is well. OTOH, I might really want to keep the old R2
> around while I have the new R1. This is a mistake in the IM,
> which should be fixed.
> However, if I didn't look too carefully
> at the IM when I made the change, this error gets immortalized
> in the code.
> [...]

This is a meaningless example. By explicitly linking r1 and r2 in the
data model, you are asserting that they are linked. If you need to
have independent r1 and r2 then, as you say, the information model is
wrong. What you are demonstrating is that if you think in terms of
link/unlink, then the information model has some unintuitive
properties.

The intention of your example seems to be to demonstrate that no
formalism can prevent mistakes. The intention of my example was to
show that the link concept is incompletely defined. This could be
fixed (by defining it), but to do so may place undesirable
restrictions on an architecture. You would either define that
link/unlink do affect other relationships, or that they don't. Both
points of view make architectural assumptions.

When you use referential attributes, the same situation does not
occur, because the semantics (writing the attribute affects both
relationships) do not permit the relationships to be considered
independently in the OOA.
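To illustrate that the data view does not prescribe the
implementation, here is a rough Python sketch -- invented names, no
particular tool -- in which the same referential-attribute
abstraction is translated onto two different mechanisms:

# One OOA-level operation: navigate R1 from a B instance to its A.
# Two substitutable implementations of the formalising data.

class TableLookup:
    """B.ref implemented as a key into a lookup table of A instances."""
    def __init__(self, a_instances):
        self.a = a_instances
        self.ref = None
    def set_ref(self, ident):  self.ref = ident
    def navigate(self):        return self.a.get(self.ref)

class LinkedRef:
    """B.ref implemented as a direct pointer; the key is derivable."""
    def __init__(self):
        self.target = None
    def set_ref(self, a_instance):  self.target = a_instance
    def navigate(self):             return self.target

a_instances = {"1": {"id": "1"}, "2": {"id": "2"}}

b1 = TableLookup(a_instances); b1.set_ref("2")
b2 = LinkedRef();              b2.set_ref(a_instances["2"])

print(b1.navigate() is b2.navigate())   # True: same OOA-level answer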
Dave. Not speaking for GPS.
--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277    mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306    http://www.gpsemi.com


Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I think that this is one area where we slightly differ. I view
> the OOA model as a level of abstraction. Objects, relationships,
> attributes, states, processes, etc. all cooperate to form a
> single, consistent model. There is no need to consider any
> elements as being higher, or lower, abstractions.

Yes, we do differ here. For example, it seems clear to me that ADFDs
are at a much lower level of abstraction than STDs. Similarly, though
to a much smaller degree, I initially think of an IM in terms of
objects and relationships with little or no concern for the specific
attributes. (I would guess that a third of our non-spec attributes
get defined during STD or action code development.) So I see objects
and relationships as a higher level of abstraction than attributes,
just as ADFDs are refinements of STD state descriptions.

> This is true, but the restrictions on writing referential
> attributes are themselves unnecessary additions to the method.
> If you subtract link, unlink and the restrictions from the
> paper then you do not lose any functionality; but the method
> becomes simpler.

Less verbose, but not necessarily simpler. I still see them as
equivalent alternatives, so the overall semantic content is the same.

> No, the idea of "not participating" does not prescribe an
> implementation. It is completely compatible with a link
> based implementation.

We disagree here also. I think it does prescribe an implementation
because it defines a value in the data domain of the referential
attribute. Now the referential attribute is no longer a metaphor, it
truly is a data element. I also do not think it is relevant in the
link/unlink paradigm (more below).

> [two unlinks on relationships with shared formalisation]
>
> > Whether the second unlink does something significant, does
> > something redundant, or whether it does anything at all in the
> > implementation depends upon what the underlying implementation
> > is.
>
> It shouldn't. It should be defined by the method. The only
> information that the method associates with a relationship is the
> referential attribute. As soon as the first unlink has written
> this as NULL then the second unlink will have no effect. As I
> have said on many occasions: unless PT add some hidden variables
> to the formalism then relationships are completely defined by the
> referential attributes.

We have a disconnect here. My paragraph says that what an OOA
construct does or does not do in the implementation depends upon the
particular implementation (architecture). This seems to me to be
patently true for any OOA construct. The method definitely should not
prescribe how an OOA construct is done in the implementation. I see
the OOA constructs as abstractions that define specific requirements
for the implementation, but they do not determine how the
implementation satisfies those requirements.

It seems to me you are assuming a specific implementation where there
is one concrete attribute value in the implementation for the
relational identifier. In that specific architecture the second
unlink would, indeed, have no effect. But in most practical
implementations (e.g., ones that could properly deal with conditional
relationships sharing identifiers in the OOA) things would not be so
simplistic. There might be, for example, _two_ referential attributes
where the unlink would write to only one and the link would check the
state of the other before writing. The assertion that an unlink has
no effect is only relevant to the specific implementation; it is not
true at the OOA level (more below).

> The best you can hope for under the current formalism is an
> architectural restriction that informs you that the second
> unlink (or lack of it) in this scenario does not conform to
> the method. That may be a perfectly reasonable point of view
> for your project.

The second unlink cannot fail to conform to the method unless the
relationship being removed does not exist. Since the first unlink did
not remove the second unlink's relationship, there should be no
problem unless there is an analysis error.

This makes me think that you are assuming the unlink is just another
form of write to the referential attribute. However, I view it as an
alternative paradigm that has nothing to do with writing referential
attributes. The first unlink removes the A/B relationship and the
second removes the C/B relationship. Their implementation has to do
this within the constraints defined by the referential attributes in
the IM (e.g., if I link a relationship with a shared identifier, then
the new identifier had better match up with the other relationship if
it exists). But at the OOA level these are different operations,
regardless of whether the referential identifier is shared in the IM.
From the view of the OOA both operations produce different results
(i.e., a different relationship is deactivated). In this paradigm the
referential attributes are only written in the implementation, if at
all.
> > As far as observability is concerned, I don't see a difference.
> > When you write NOT PARTICIPATING to the relational attribute
> > the observation is, "Both A/B and C/B relationships have been
> > removed". My first unlink says, "relationship A/B has been
> > removed" and my second unlink says, "relationship C/B has
> > been removed". More verbose, but equivalent observability.
>
> Not quite. If you take the view that the formalising attribute
> is the relationship, then the first unlink will have set this
> attribute to "not participating". From the point of view of
> OOA, the second has no observable effect. Thus, if you write
> a model where you omit the second unlink, there will be no
> implication in simulation (but although there is no effect,
> you would define it to be an error!).

But I don't take that view. As soon as one is using link/unlink one
is using a different paradigm for managing relationships. Put another
way, writing NOT PARTICIPATING has no meaning in the link/unlink
alternative because one cannot write to the referential attribute --
there, happily, is no NOT PARTICIPATING in the link/unlink paradigm.
That notion is now relegated strictly to the implementation, because
in the OOA the view revolves around whether the individual
relationships are active or not.

Going back to your original example of swapping related instances
having a shared identifier, I think link/unlink is more intuitive if
the relationships are unconditional. The unconditionality suggests
that NOT PARTICIPATING should not be relevant. However, one is left
with that awkward moment within the action when one needs to remove
the existing relationships before activating the new ones. This is
conventionally ignored with the seriously overloaded legerdemain of
simply zapping the new value into the shared referential attribute. I
would argue that in this situation link/unlink are much more
intuitive, albeit wordier, as I get rid of the old relationships and
then define new ones.

> > Now let me provide my counter example. Suppose I have an
> > object with two relational identifiers:
> >
> >   ref_1 (R1, R2) currently has value 5
> >   ref_2 (R1) currently has value 1
> >
> > I want to link R1 to a new instance where ref_1=4 and ref_2=1.
> > I write a 4 to ref_1. The write accessor will dutifully remove
> > the R1 and R2 relationships and activate new ones, with a
> > different R2.
> >
> > There are two possibilities. I really wanted to have a new R2,
> > so all is well. OTOH, I might really want to keep the old R2
> > around while I have the new R1. This is a mistake in the IM,
> > which should be fixed. However, if I didn't look too carefully
> > at the IM when I made the change, this error gets immortalized
> > in the code.
> > [...]
>
> This is a meaningless example. By explicitly linking r1 and r2
> in the data model, you are asserting that they are linked. If
> you need to have independent r1 and r2 then, as you say, the
> information model is wrong. What you are demonstrating is
> that if you think in terms of link/unlink, then the information
> model has some unintuitive properties.
>
> The intention of your example seems to be to demonstrate that
> no formalism can prevent mistakes.

I think you are missing my point. My assertion is the opposite: using
link/unlink provides a more reliable means for detecting a particular
class of analyst errors than writing to referential attributes. My
assumption (in the second case) is that there is an analyst error.
[Note that the errors are different sides of the same coin. In the
write case the error is that the R2 instance is incorrectly changed
(i.e., an IM error), while in the link/unlink case the error is that
the R2 instance was not correctly changed (i.e., a necessary
unlink/link is omitted).]

If the analyst does not happen to notice the problem, possibly due to
modest substance abuse, and modifies the R1 relationship without
checking the IM properly, the issue becomes: how does one prevent
this error from escaping?
It can only be detected, when writing attributes, by having
simulation use cases that will detect that the wrong R2 instance is
present at some point after the R1 modification. However, in the
link/unlink case the simulator can unequivocally detect the problem
immediately, as the R1 link is done, if the wrong R2 instance is
present.

> The intention of my example was to show that the link concept
> is incompletely defined. This could be fixed (by defining it),
> but to do so may place undesirable restrictions on an
> architecture. You would either define that link/unlink do
> affect other relationships, or that they don't. Both points
> of view make architectural assumptions.

I don't follow this. I do not see where link/unlink is incompletely
defined.

> When you use referential attributes, the same situation
> does not occur, because the semantics (writing the attribute
> affects both relationships) do not permit the relationships
> to be considered independently in the OOA.

I do not see this distinction. In my example, if I want to change the
instance for R1 and I am doing referential attribute writes, then I
should check the IM and realize that the R2 attribute is going to
change as a byproduct of my changing R1's instance. I would then have
to verify that this is acceptable in the model context; if it isn't,
I need to change the IM or change the R1 relationship elsewhere. If I
am doing link/unlink, I have to look at the IM and realize that I
need to also unlink and link the R2 instance. I would have to verify
that this is acceptable in the model context; if it isn't, I need to
change the IM or change the R1 relationship elsewhere.

The only difference is what happens if I am hung over and fail to
look at the IM. In the first case my negligence might lead to a user
law suit, termination, destitution, divorce, and sleeping in
refrigerator cartons. But in the second case my faithful simulator
will find the problem the first time my new link is executed and, if
no one happens to be looking as I fix the problem, I will be able to
continue basking in the adulation of my peers for never making a
misteak.
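To flesh out the "two referential attributes" possibility I mentioned
above, here is a hypothetical Python sketch of such an architecture.
Under it the second unlink plainly does have an effect, even though
the OOA shows a single shared referential attribute:

# A hypothetical architecture that keeps a separate binding per
# relationship; the OOA's shared referential attribute is derived.
bindings = {"R1": "1", "R2": "1"}   # B's bindings for R1 (A) and R2 (C)

def unlink(rel):
    bindings[rel] = None

def shared_referential_attribute():
    """The single attribute the IM shows; consistent only when the
    active bindings agree."""
    active = {v for v in bindings.values() if v is not None}
    if len(active) > 1:
        raise RuntimeError("relational integrity violated")
    return active.pop() if active else "NOT PARTICIPATING"

unlink("R1")
print(bindings)                        # {'R1': None, 'R2': '1'}
unlink("R2")                           # the second unlink changes state too
print(bindings)                        # {'R1': None, 'R2': None}
print(shared_referential_attribute())  # NOT PARTICIPATING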
--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATB                       could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Apologies for not responding earlier. My modem had an argument with a
passing thunderstorm and lost.

Dave Whipp wrote:

> Mike Finn wrote:
> > 2) Decide Whipp's model has no meaning in the Real World and
> >    thus is an irrelevant construct. I can't find any example
> >    like this in the texts. Nor can I think of one.
>
> Try Fig 3.8 on page 15 of the OOA96 Report.

Thanks, I hadn't spotted this diagram. Although it does show an
object's attribute formalizing two relationships simultaneously, I
think it's significantly different. The Department_id attribute in
all three objects shares the same data domain, and
Student.Department_id is also an identifying attribute. Therefore, I
really can't see what effect link/unlink via R3 could possibly have,
since Student.Department_id is not allowed to change.

> You will also find many such formalisations in complex subtype
> structures; though these don't have the same problems with
> link/unlink.

Looking at Figure 2.6.3 in the OL book: when an object migrates,
there is no problem if link/unlink have a null effect.

Mike
--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145


Subject: Re: (SMU) SMALL: Dataflows are Dataflows

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> See my response to Whipp for more detail. The crucial issue here
> is that the unlinks are missing. If the existing relationships
> are removed prior to linking the new ones, then the links
> have the independence you seek.

I forgot to include the effect of unlink, but it makes no difference.
Dave Whipp's model again:

> Consider a model with 3 objects (A, B and C), with relationships
> r1 and r2 between A/B and B/C respectively. For this demonstration,
> I will define a referential attribute in B to formalise both
> relationships (for simplicity, there are no compound identifiers).
>
> Suppose there are two instances of A; two instances of C; and one
> instance of B. Let the identifiers of the instances of both A and
> C be "1" and "2". The value of the referential attribute in B
> will therefore be either "1" or "2".
>
> Initially, the value of the referential attribute is "1": my
> B is linked to instance "1" of A and instance "1" of C.

This time I execute "(refB, refA1) | unlink R1" as you suggest. What
effect has this statement had?

In section 6.13, page 19 of the SMALL paper, it talks about "Creating
an Instance of a Relationship" with the link operator. This does not
make any sense to me. There are no "instances of a relationship" in
OOA. Excluding associative objects, relationships are only formalized
with a referential attribute in one of the two objects concerned.

However, one way to implement link/unlink would be to create an extra
object between each pair of related objects that would hold just the
identifiers of both objects. This could be the proper place for such
"instances of a relationship", but it's outside OOA. You can continue
adding these extra objects for as long as you like, and eventually
you'll create a fractal IM. :-)
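In Python-flavoured pseudocode (my own invention, not anything from
the SMALL paper), the extra object would look something like this:

# Reifying R1 as a set of "relationship instances", each holding
# just the identifiers of the two participants.
r1_instances = set()                    # {(b_id, a_id), ...}

def link_r1(b_id, a_id):
    r1_instances.add((b_id, a_id))      # create an instance of R1

def unlink_r1(b_id, a_id):
    r1_instances.discard((b_id, a_id))  # delete an instance of R1

link_r1("b1", "1")
unlink_r1("b1", "1")
link_r1("b1", "2")
print(r1_instances)                     # {('b1', '2')}

Each of those pairs is itself identified by identifiers borrowed from
other objects - which is exactly the kind of object that could then
need its own formalisation, hence the fractal.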
lahman wrote: > For example, it seems clear to me that ADFDs are at a much lower > level of abstraction than STDs. Similarly, though to a much > smaller degree, [...], I see objects and relationships as a > higher level of abstraction than attributes, just as > ADFDs are refinements of STD state descriptions. I think there may be a language difference here - there is a difference between adding detail and lowering abstraction. Attributes add detail to objects, but are part of the same abstraction. Similarly, I view states and processes as simply added details, not different abstractions. > > If you subtract link, unlink and the restrictions from the > > paper then you do not lose any functionality; but the method > > becomes simpler. > > Less verbose, but not necessarily simpler. I still see them as > equivalent alternatives so that the overall semantic content is the > same. When I say: "the method becomes simpler", I do not mean that the models become simpler. I mean that the formalism (the OOA-of-OOA) is simpler. > > No, the idea of "not participating" does not prescribe an > > implementation. It is completely compatible with a link > > based implementation. > > We disagree here also. I think it does prescribe an implementation > because it defines a value in the data domain of the referential > attribute. Now the referential attribute is no longer a metaphor, > it truly is a data element. I am not sure how to respond to this. It's the sort of thing that is so fundamental to recursive design that it's difficult to find a rational argument. So rather than trying, I'll just mention the SES generic architecture from their Genesis product. It allows you to color (define properties for) relationships to control whether they are implemented using linked lists or lookup tables. The fact that it works demonstrates that the data model does not prescribe the implementation. [...] > We have a disconnect here. My paragraph says that what an OOA > construct does or does not do in the implementation depends upon > the particular implementation (architecture). This seems to me > to be patently true for any OOA construct. If your implementation implements a delete-everything process instead of a create accessor, then that behaviour is, indeed, defined by the implementation. But it would obviously be wrong wrt the method. The method must define a concept sufficiently to be able to determine whether or not an implementation is valid. My original statement should be read in this spirit. [...] >I think you are missing my point. My assertion is the opposite: > using link/unlink provides a more reliable means for detecting > a particular class of analyst errors than writing to referential > attributes. The problem with this statement is that the class of error that you identified is only possible if you are thinking in terms of link/unlink. So you are arguing for a paradigm on the basis that it can detect a class of error that it introduces. Hardly a compelling argument. > If I want to change the instance for R1 and am doing > referential writes then I should check the IM and realise > that the R2 attribute is going to change as a _byproduct_ > of my changing R1's instance I have emphasised the word "byproduct" because this is a key misunderstanding. I do not regard this effect as a byproduct. It is a fact. If you write a value to an attribute then the value of the attribute changes. This should not be a surprise. > I think link/unlink is more intuitive if the relationships > are unconditional.
The unconditionality suggests that NOT > PARTICIPATING should not be relevant. However, one is left > with that awkward moment within the action when one needs to > remove the existing relationships before activating the new > ones. This is conventionally ignored with the seriously > overloaded legerdemain of simply zapping the new value into > the shared referential attribute. But if you don't think in terms of link/unlink then there is no need to blank the referential attributes before writing a new value. The write operation is not overloaded - it does a single task - it sets the value of the attribute. An implementation may require it to do more than this (and less!!!) but that is an implementation issue. Consider a simple relationship between "country" and "person" objects. The relationship is formalised in the attribute: "country.president". I can change the value of country.president from "Clinton" to "Gore" without needing an intermediate value of "none". I regard the operation as entirely intuitive. Where do the concepts of linking and unlinking come in? Any state changes within the participating object-instances will be handled by their state models. Dave. Not speaking for GPS. -- Dave Whipp, Embedded Systems Group GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) SMALL: Dataflows are Dataflows lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > You were correct first time. If A and C were subtypes then the > same real world instance would have to exist in both with the same > identifier value. Yes, sometimes even talking to oneself is of no help. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) SMALL: Dataflows are Dataflows lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > This time I execute "(refB, refA1) | unlink R1" as you suggest. > > What effect has this statement had? It has removed the relationship between the A and B instances. > In section 6.13, page 19 of the SMALL paper, it talks about > "Creating an Instance of a Relationship" with the link operator. > > This does not make any sense to me. There are no "instances of a > relationship" in OOA. Excluding associative objects, relationships > are only formalized with a referential attribute in one of the two > objects concerned. I hope they did not mean this literally in the sense of a relationship _always_ being some sort of object in the architecture. In the architecture there has to be some mechanism for keeping track of, navigating across, and preserving the integrity of relationships. Depending upon the context there are lots of mechanisms for maintaining relationships. These range from literally using foreign keys in relational tables to maintaining sorted arrays of pointers. There may or may not be an additional architectural object, such as an array, that is instantiated. So I assumed the statement was referring to selecting or activating a particular architectural mechanism for the relationship. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av.
L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) Levels of abstraction lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp (SMALL: Link/unlink)... > I apologise for not responding to every point made. The post was 5 > pages long and I would have no time for work if I responded > to everything. My aim is mainly to clarify the points I made > in my previous post. You have a day job? Part of the problem is that we keep opening new boxes. I am going to start splitting this up. > lahman wrote: > > For example, it seems clear to me that ADFDs are at a much lower > > level of abstraction than STDs. Similarly, though to a much > > smaller degree, [...], I see objects and relationships as a > > higher level of abstraction than attributes, just as > > ADFDs are refinements of STD state descriptions. > > I think there may be a language difference here - there is a > difference between adding detail and lowering abstraction. > Attributes add detail to objects, but are part of the same > abstraction. Similarly, I view states and processes as > simply added details, not different abstractions. We differ here quite a bit. In my view the nature of an abstraction is that it is a summarization of detail. Therefore the details have to be at a different level of abstraction. If you have a fractal-like hierarchy of detail, then each level of summarization is a different level of abstraction. > When I say: "the method becomes simpler", I do not mean that > the models become simpler. I mean that the formalism (the > OOA-of-OOA) is simpler. At the risk of opening a whole new Pandora's Box on OOA-of-OOA abstractions, how so? I would think that all that changes is the labels on the relationships between some objects (i.e., "is related through" becomes "is linked by"). If one adds new subtypes for Process (e.g., a Link Process), one would already have needed to subtype Write Accessor for the difference in relational vs. non-relational attributes, so those could just be renamed. (I don't see much need for the subtyping; the data would not be different and the functionality is an implementation issue. So I think the subtypes would only be justified on the basis of the need for unique relationships.) My point here is that link/unlink is a mechanism for activating relationships that is equivalent to writing to a relational attribute. (It is also a separate mechanism from the one that ensures relational integrity, which must exist in terms of identifiers.) By the time this mechanism gets abstracted into an OOA-of-OOA of the methodology, it is not clear to me that one could tell the difference. It seems to me that the difference would only become apparent if one were modeling the notation syntax. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) NOT PARTICIPATING lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp (from SMALL:link/unlink)... > > > No, the idea of "not participating" does not prescribe an > > > implementation. It is completely compatible with a link > > > based implementation. > > > > We disagree here also.
I think it does prescribe an implementation > > because it defines a value in the data domain of the referential > > attribute. Now the referential attribute is no longer a metaphor, > > it truly is a data element. > > I am not sure how to respond to this. It's the sort of thing > that is so fundamental to recursive design that it's difficult > to find a rational argument. So rather than trying, I'll just > mention the SES generic architecture from their Genesis > product. It allows you to color (define properties for) > relationships to control whether they are implemented using > linked lists or lookup tables. > > The fact that it works demonstrates that the data model does not > prescribe the implementation. They work because the implementation is supporting relational integrity in a different manner than that described in the notation. Let's say I have a relationship between A and B that is A:B::1c:Mc. It is fine for the architecture to support a 1c:Mc by placing a pointer to a linked list on the A side. This is strictly a performance issue. As it happens a NULL linked list pointer satisfies the need for NOT PARTICIPATING from the A side viewpoint. However, the architecture also has to support getting from a B to the related A. The B side must have a mechanism to do this. One way to do this would be to search all the A instances' linked lists for the relevant B instance. This satisfies the relational integrity issues for the identifier metaphor. But it is not legal to use this mechanism so long as NOT PARTICIPATING is required. To support NOT PARTICIPATING there must be an attribute of some sort in B to hold the value that the methodology has specified must exist if the relationship is not active from the particular B. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) SMALL: Link/Unlink lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > >I think you are missing my point. My assertion is the opposite: > > using link/unlink provides a more reliable means for detecting > > a particular class of analyst errors than writing to referential > > attributes. > > The problem with this statement is that the class of error that > you identified is only possible if you are thinking in terms > of link/unlink. So you are arguing for a paradigm on the > basis that it can detect a class of error that it introduces. > Hardly a compelling argument. I think you have to look at the example again. There is a possible analysis error when using _both_ approaches. The errors are different in detail, but they both stem from failing to properly account for the shared identifier when assigning a relationship. That is, the analyst failed to look at the IM and note the implications. Given this, my assertion stands -- link/unlink provides better error detection. > > If I want to change the instance for R1 and am doing > > referential writes then I should check the IM and realise > > that the R2 attribute is going to change as a _byproduct_ > > of my changing R1's instance > > I have emphasised the word "byproduct" because this is a > key misunderstanding. I do not regard this effect as > a byproduct. It is a fact. If you write a value to an > attribute then the value of the attribute changes. This > should not be a surprise.
This underscores an interesting difference in our viewpoints. I am looking at this from the position of what kinds of trouble an analyst can get into during the normal course of development. It is not uncommon when writing state actions to find that one must change a relationship. At that moment the analyst is thinking only in terms of the new instance that needs to be related to the one in hand. The analyst is focused on that particular relationship. This is particularly true when making maintenance type changes. If the analyst does not look at the IM and note the shared identifier, the analyst can screw up. This is true regardless of which technique is used. One can argue that one _should_ look at the IM, but the reality is that people are not perfect and mistakes are made. Given that this type of mistake can be made, I regard it as a persuasive argument that link/unlink provides a more reliable mechanism for detecting the error. > > I think link/unlink is more intuitive if the relationships > > are unconditional. The unconditionality suggests that NOT > > PARTICIPATING should not be relevant. However, one is left > > with that awkward moment within the action when one needs to > > remove the existing relationships before activating the new > > ones. This is conventionally ignored with the seriously > > overloaded legerdemain of simply zapping the new value into > > the shared referential attribute. > > But if you don't think in terms of link/unlink then there is > no need to blank the referential attributes before writing a > new value. The write operation is not overloaded - it does a > single task - it sets the value of the attribute. An > implementation may require it to do more than this (and > less!!!) but that is an implementation issue. You have used the phrase, "if you don't think in terms of link/unlink..." several times. If link/unlink is the specified alternative defined in the action language, you don't have a choice about thinking about it. I could turn this around and argue that you seem to always be thinking in terms of writing attributes when talking about link/unlink. Don't Do That and the pain may go away. I contend that writing the attribute is not so simplistic. It is a metaphor for much more complex activity. I believe that it is undeniable that writing that attribute is changing relationships. That means it is removing existing relationships and it is creating new ones at the instance level. This is important at the analysis level. You cannot hand simulate or verify relational loops unless you are aware of this at the OOA level. Similarly, it would be terminally naive to think that writing NOT PARTICIPATING to a shared attribute for two conditional relationships was "just setting an attribute value". Besides, you were arguing earlier in this thread that it was, indeed, modifying the relationships. You can't have your cake and eat it too. > Consider a simple relationship between "country" and "person" > objects. The relationship is formalised in the attribute: > "country.president". I can change the value of country.president > from "Clinton" to "Gore" without needing an intermediate > value of "none". I regard the operation as entirely intuitive. > Where do the concepts of linking and unlinking come in? Any > state changes within the participating object-instances > will be handled by their state models. What the relational attribute metaphor is really doing is assassinating one relationship and usurping it with another.
The Clinton relationship muddled along before the operation and the Gore relationship would muddle along afterwards. Clearly the state of the system is significantly different, depending upon which relationship is muddling. In this particular case it is also clear that the country could get along with no relationship. All this underscores the fact that assassination and usurpation are very different activities that deserve partitioning. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) NOT PARTICIPATING Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > They work because the implementation is supporting relational > integrity in a different manner than that described in the > notation. Let's say I have a relationship between A and B that > is A:B::1c:Mc. It is fine for the architecture to support a > 1c:Mc by placing a pointer to a linked list on the A side. > This is strictly a performance issue. As it happens a NULL > linked list pointer satisfies the need for NOT PARTICIPATING from > the A side viewpoint. > > However, the architecture also has to support getting from a B > to the related A. The B side must have a mechanism to do this. > One way to do this would be to search all the A > instances' linked lists for the relevant B instance. This > satisfies the relational integrity issues for the identifier > metaphor. But it is not legal to use this mechanism so long as > NOT PARTICIPATING is required. To support NOT PARTICIPATING > there must be an attribute of some sort in B to hold the value > that the methodology has specified must exist if the > relationship is not active from the particular B. Now I see where the disagreement is. The fact that the abstraction uses NOT-PARTICIPATING does not require the implementation to store it. Only the external interface must be preserved. Even if you have an external wormhole that wants to see NOT PARTICIPATING this does not require you to use it internally. You could use your suggested implementation. In general, it is better to use accessor functions than to read data members directly. When you use an accessor function, this hides the internal representation of the attribute. Your naive implementation might be:

    id_a_T* B::get_attr_ref1()
    {
        // search every A instance's linked list of Bs for this B;
        // if found, return that A's identifier, else NULL
        for (iter_A i = A::instances.begin(); i != A::instances.end(); i++) {
            for (iter_B j = i->r1.begin(); j != i->r1.end(); j++) {
                if (*j == this) return i->get_attr_id();
            }
        }
        return 0;
    }

This function would return a value that is consistent with the data model (where NULL == NOT_PARTICIPATING). So even if you are very strict about requiring an implementation to support not-participating, your implementation would be legal. But the fact that the OOA abstraction requires NOT PARTICIPATING does not require the implementation to support it, so you would not have this type of thing in production code. Even with your naive implementation, the return statement in the function would read a pointer to A, not to the referential attribute. So some of the performance can be clawed back. When running code in debug mode it can be useful to have debug functions to print out the values of attributes. For these functions it is adequate to have a low performance algorithm such as the one above. Dave. -- Dave Whipp, Embedded Systems Group GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ. tel.
+44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Levels of abstraction "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -----Original Message----- From: lahman To: shlaer-mellor-users@projtech.com Date: Friday, January 09, 1998 3:46 PM Subject: Re: (SMU) Levels of abstraction >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >> lahman wrote: >> > For example, it seems clear to me that ADFDs are at a much lower >> > level of abstraction than STDs. Similarly, though to a much >> > smaller degree, [...], I see objects and relationships as a >> > higher level of abstraction than attributes, just as >> > ADFDs are refinements of STD state descriptions. >> >> I think there may be a language difference here - there is a >> difference between adding detail and lowering abstraction. >> Attributes add detail to objects, but are part of the same >> abstraction. Similarly, I view states and processes as >> simply added details, not different abstractions. > >We differ here quite a bit. In my view the nature of an abstraction is >that it is a summarization of detail. Therefore the details have to be >at a different level of abstraction. If you have a fractal-like >hierarchy of detail, then each level of summarization is a different >level of abstraction. > >> When I say: "the method becomes simpler", I do not mean that >> the models become simpler. I mean that the formalism (the >> OOA-of-OOA) is simpler. > >At the risk of opening a whole new Pandora's Box on OOA-of-OOA >abstractions, how so? > I like this discussion. Let's keep it going for a little while. I first heard of the term 'Level of Abstraction' in terms of software, when I was a 'software engineer' in 1989. I understood it to mean something to do with the amount of detail being presented. After reading the above discussion, I find myself a little confused. When you change the level of abstraction, are you, 1) just adding more detail (more design) to the model that exists, or 2) are levels of abstraction predefined levels in modeling a system? I.e. can LOA be sort of continuous or are they more discrete? For example, if I have an Object diagram and I add state transition diagrams, have I changed the level of abstraction? Similarly, if I have a Shlaer/Mellor object diagram and I add details to the relationships, like UML class diagrams, to describe aggregation etc, and the number of instances involved in each side of the relationship, have I changed the LOA also? Actually, as I write this e-mail, I'm starting to answer my questions, and one thing I will argue at this point is that a level of abstraction must be consistent across the whole model. So adding detail to a single object does not change the LOA. The same detail must be added to all. I guess that moves the concept away from the continuous and more into the realms of a discrete concept. Until Monday, Leslie. Subject: Re: (SMU) Levels of abstraction David.Whipp@gpsemi.com (Dave Whipp) writes to shlaer-mellor-users: -------------------------------------------------------------------- I can best explore the nature of abstraction by looking at the difference between analogue and digital electronics. These two ways of looking at a system differ in their view of a signal value. In the analogue domain, the value is a continuous range; in the digital domain it is one of a number of discrete values.
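[The kind of value mapping described next can be sketched in code; in this editorial sketch the logic-0 band is invented for illustration, while the logic-1 band borrows the 2.7 to 3.6 volt example given below.

    #include <optional>

    enum class Logic { Zero, One };

    // Map a continuous analogue voltage onto a discrete logic value.
    // Voltages outside both bands are analogue phenomena with no
    // digital counterpart.
    std::optional<Logic> classify(double volts) {
        if (volts >= 0.0 && volts <= 0.8) return Logic::Zero;  // invented band
        if (volts >= 2.7 && volts <= 3.6) return Logic::One;   // band from the text
        return std::nullopt;
    }
]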
An implementation maps values in one domain to values in the other. For example, logic-1 may be mapped to a range of values from 4.5 to 5.5 volts; or it may be a range from 2.7 to 3.6 volts. The mapping between values in the digital domain and values in the analogue domain is 1c:M. This means that although every digital value has an analogue counterpart, the reverse is not true. There are analogue phenomena that have no digital counterpart. The last relevant point is that a digital description of a digital system is complete. The digital description is adequate to describe the behaviour seen by a digital interface. This does not mean that all the behaviour can be understood in terms of digital events; just that it can be abstracted and described. For example, simulators will contain models for edge speed and signal propagation times. Analogue domain simulations calibrate these abstractions; but the abstraction is adequate for digital simulation. The point of this discussion is to identify the role of abstraction within OOA. The most important feature is that an abstraction should be adequate to sustain a complete system description from the point of view of that abstraction. The other important feature is that the mapping from a high-level to a low-level abstraction is 1c:M. Every feature in the high level abstraction can be explained in terms of one or more low level features; but the reverse is not true. So, does the concept of an object form a complete abstraction? The answer must be "no". A set of object names does not describe the system dynamics; and it is unrelated to the system interfaces. Even if you add attributes, you still do not get a complete view of the behaviour of the system. It is necessary to describe the activity at the level of the process model before a complete system description can be developed, because it is only at this level that interaction with the interfaces is described. My feeling is that it is necessary to provide a mathematical (formal) description of the effect of processes on data before the system description is complete. However, this description is not an implementation. Analogously, the effect of an edge in the digital domain (a period of uncertainty) can be described without reference to its true analogue meaning. When working within the digital domain, it is often possible to abstract timing details in the early phases of design. Unit delay models are frequently used when precise timing details are not required. However, it is recognised that the unit delay model is not a true abstraction. It is a postponement of detail. It is only possible to get very accurate timings after the design is complete and layout information can be used to extract physical parameters such as track capacitances and lengths. At this point, accurate timings can be calculated and these can be fed back into the model to see if it works. The reason for claiming that the unit-delay model is not a true abstraction is that the back-annotation of timing detail does nothing more than tweak the values of the timing parameters. In the unit delay model, all timings are set to the same value (1 unit). As design progresses, the same timing parameters are maintained, but their values are refined. If we compare this with the object model, you could claim that parameters are not maintained - attributes, states, etc. are added throughout the period of the model's construction.
However, on closer examination, you realise that the identification of objects actually identifies the attributes, etc., at the same time - you just have to discover what they are. If you discover that they have the wrong attributes or behaviour then you attempt to identify different objects. Dave. Not speaking for GPS. -- Dave Whipp, Embedded Systems Group GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) SMALL: Link/Unlink David.Whipp@gpsemi.com (Dave Whipp) writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > I think you have to look at the example again. There is a > possible analysis error when using _both_ approaches. The > errors are different in detail, but they both stem from > failing to properly account for the shared identifier when > assigning a relationship. That is, the analyst failed to look > at the IM and note the implications. Given this, my assertion > stands -- link/unlink provides better error detection. I stand by my assertion. The error you identify is conceptual. It does, indeed, exist independently of the notation; but it is a result of the thought processes of the analyst. Your error is that a modeller wants to link two objects together using a relationship; but doesn't realise that a second relationship is tied to the first. Writing a value to the referential attribute, in this scenario, has the unexpected side effect of changing two relationships. This error exists because the analyst attempted to link two objects - and focused on the relationship. It is the focus on the relationship that leads to the mistake. Consider a different scenario: I'll reuse my previous example. The analyst is working on a model and finds an object called "country" with an attribute "president". Realising that an event is received with the recently elected president as supplemental data, the analyst decides to write this name to the "president" attribute. The former president is replaced with the new one ("The king is dead: Long live the King!") The analyst in this situation didn't realise that the attribute just written is a referential attribute - indeed, it might formalise many relationships. It really doesn't matter. The required result was to set the value of the attribute, not to link instances together. That is why I said your error was a result of the mindset. I do not doubt that there are errors that can exist for the data modeller that are inconceivable to the relationship modeller. But that was not the point of this subthread. Sometimes, the information modelling process may indicate that it is necessary to focus on the relationship, rather than the participating objects. Even without the introduction of link/unlink, the method provides a means to do this. I'll get back to that in just a moment. > You have used the phrase, "if you don't think in terms of > link/unlink..." several times. If link/unlink is the specified > alternative defined in the action language, you don't have a > choice about thinking about it. I could turn this around and > argue that you seem to always be thinking in terms of writing > attributes when talking about link/unlink. Don't Do That and > the pain may go away. I agree with this paragraph 100.0%. Indeed, it is the point that I have been trying to make.
I have said on a number of occasions in this thread that to make link/unlink fully compatible with the OIM, the OIM should be changed to replace referential attributes with relationship information structures. Alternatively, link/unlink should be abolished and a consistent abstraction of referential attributes used for both information and process models. However, as I said a moment ago, the method already provides the means of eliminating referential attributes from participating objects: and it does not require the introduction of link/unlink. The mechanism is, of course, the associative object. The creation of an associative object links the relationship; and its deletion unlinks the relationship (see the sketch below). When you use an associative object, there is no need to use NOT PARTICIPATING (which is a good reason for always placing an associative object on 1c:1c, 1c:M and 1c:Mc relationships). Given that the method, pre-SMALL, provided the choice of both styles of model: what is the benefit of introducing a new mechanism, as an addition, that eliminates that choice? > I contend that writing the attribute is not so simplistic. It > is a metaphor for much more complex activity. I believe that > it is undeniable that writing that attribute is changing > relationships. That means it is removing existing > relationships and it is creating new ones at the instance > level. This is important at the analysis level. You cannot > hand simulate or verify relational loops unless you are aware > of this at the OOA level. Similarly, it would be terminally > naive to think that writing NOT PARTICIPATING to a shared > attribute for two conditional relationships was "just setting > an attribute value". Perhaps I am terminally naive, but from the point of view of OOA, I do believe that writing NOT PARTICIPATING is "just setting an attribute value". Within the abstraction, there is nothing complex about writing an attribute. There is absolutely no difference between writing to a descriptive attribute and writing to a referential attribute (that should answer your OOA-of-OOA question in another thread). At the analysis level, there is no concept of "removing existing relationships and creating new ones". It is only when this high level action is mapped onto a lower level abstraction that the true complexity of the operation is revealed. > Besides, you were arguing earlier in this thread that it was, > indeed, modifying the relationships. You can't have your cake > and eat it too. I am arguing that, within the OOA abstraction, referential attributes do define the relationship. You have agreed on a number of occasions (though you keep changing your story) that the only information within the OOA that describes relationships is the referential attribute. You often seem to confuse the issue by saying that link and unlink are able to manipulate relationships independently of the referential attributes without introducing any hidden variables - I contend that this is impossible (a hidden associative object is required). If referential attributes and relationships are synonymous within the abstraction, then I have every right to say that modifying the attribute does, indeed, modify the relationship.
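[A minimal editorial sketch of the associative-object mechanism referred to above, with invented names -- one possible rendering, not the method's definition: creating an instance of the associative class links the relationship, deleting it unlinks, and neither participant holds a NOT PARTICIPATING value.

    #include <algorithm>
    #include <vector>

    struct A;
    struct B;

    struct R1 {            // the associative object formalising R1
        A* a;
        B* b;
    };

    std::vector<R1> r1;    // the current instances of R1

    void link_r1(A* a, B* b) { r1.push_back({a, b}); }

    void unlink_r1(A* a, B* b) {
        // deleting the associative instance is the unlink
        r1.erase(std::remove_if(r1.begin(), r1.end(),
                     [&](const R1& x) { return x.a == a && x.b == b; }),
                 r1.end());
    }
]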
Let me finish by summarising the changes I would make to SMALL wrt this discussion, on the basis that the eliminated features add nothing to the method:
- Eliminate restrictions on reading arbitrary valued attributes
- Eliminate restrictions on reading and writing referential attributes
- Eliminate Link and Unlink process types
- Consider eliminating NOT PARTICIPATING in favour of associative objects
- Consider eliminating the Migrate process type - it isn't powerful enough for multilevel subtype relationships, where many objects may be created and destroyed during the migration.
+ Consider mandating that identifiers are constants.
Dave. Not speaking for GPS. -- Dave Whipp, Embedded Systems Group GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com
Subject: Re: (SMU) SMALL: Dataflows are Dataflows smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman... > > This time I execute "(refB, refA1) | unlink R1" as you suggest. > > > > What effect has this statement had? > It has removed the relationship between the A and B instances. You have deliberately answered the question I wrote and not the one I meant to ask. But I won't continue with this since the model is not a good example from which to proceed. Mike -- Mike Finn, Dark Matter Systems Ltd | Email: smf@cix.co.uk | Voice: +44 (0) 1483 755145 Subject: Re: (SMU) SMALL: Link/Unlink smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman... > I could turn this around and argue that you > seem to always be thinking in terms of writing attributes when talking > about link/unlink. Don't Do That and the pain may go away. I think this is my problem. To my mind, link/unlink can *only* change attributes (in instances at the OOA level) and nothing else. Are you saying data elsewhere is recording the use of link/unlink? Mike -- Mike Finn, Dark Matter Systems Ltd | Email: smf@cix.co.uk | Voice: +44 (0) 1483 755145 Subject: Re: (SMU) SMALL: Link/Unlink Mike Morrin writes to shlaer-mellor-users: -------------------------------------------------------------------- smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Responding to Lahman... > >> I could turn this around and argue that you >> seem to always be thinking in terms of writing attributes when talking >> about link/unlink. Don't Do That and the pain may go away. > >I think this is my problem. To my mind, link/unlink can *only* change >attributes (in instances at the OOA level) and nothing else. > >Are you saying data elsewhere is recording the use of link/unlink? I think that there is! My perception is that the relationship itself is a representation of a boolean 'attribute' which is not independent of the referential attribute within one of the related objects. It would be my preference if this 'hidden' attribute could be tested, independent of the referential, making the NOT PARTICIPATING value redundant. To clarify, I would propose that before reading the referential attribute of a conditional relationship, you would first test to see that the relationship is linked. It might be argued that this complicates the action, but to me it feels like a real world complication, and no more complicated than handling the special case of NOT PARTICIPATING after reading the referential. regards, Mike Morrin Subject: Re: (SMU) Levels of abstraction lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > I like this discussion. Let's keep it going for a little while. No problem.
> I first heard of the term 'Level of Abstraction' in terms of software, > when > I was a 'software engineer' in 1989. I understood it to mean something > to do > with the amount of detail being presented. I agree here. Once one throws out the definitions related to the Arts, those remaining tend to deal with abstraction as a summarization of detail. (I actually did look it up before the original post.) > After reading the above discussion, I find myself a little confused. > When > you change the level of abstraction, are you, 1) just adding more > detail > (more design) to the model that exists, or 2) are levels of > abstraction > predefined levels in modeling a system? All of the above. My position is that attributes are an example of adding detail to an existing abstraction (i.e., an object). OTOH, I would argue that ADFDs are a predefined modeling level that provides detail for STDs. Basically, there is nothing to prevent one from having different levels of abstraction. Some of these levels may conveniently be presented as different formal levels of abstraction (i.e., different diagrams, as in the STD/ADFD case) where one's viewpoint is significantly changed while... > I.e. can LOA be sort of continuous or are they more discrete? others, such as attributes, may be closer to the fractal analogy in that they don't significantly change one's viewpoint but they do change the level of comprehension of what is being viewed by fleshing it out. So I would view attributes as a near continuous refinement for objects and relational attributes as a near continuous refinement of relationships. Meanwhile, the jump from STDs to ADFDs is more discrete. At the STD level one is interested in large scale flow of control in the application. At the ADFD level one is interested in detailed processing. > For example, if I have an Object diagram and I add state transition > diagrams, have I changed the level of abstraction? Similarly, if I > have a > Shlaer/Mellor object diagram and I add details to the relationships, > like > UML class diagrams, to describe aggregation etc, and the number of > instances > involved in each side of the relationship, have I changed the LOA > also? I would regard going from the IM to the STD as being quite different levels of abstraction certainly and, therefore, discrete. Whether they are different _levels_ is perhaps somewhat moot. The fact that each STD provides detail for a single object suggests strongly that they are at a different level. However, they are so different (data vs. process) that the level might not be particularly relevant. In answer to the second part, my simplistic answer is yes. Certainly this would be true if one regarded such UML refinements as being colorization for the translation. This would be the case since the implementation is clearly at a lower level of detail. If you are adding them as an adjunct to the OOA, then that is a no-no. One thing that is pretty much written in stone is that the level of abstraction for an OOA is at a higher level than the implementation and they are separated by Translation. > Actually, as I write this e-mail, I'm starting to answer my questions, > and > one thing I will argue at this point is that a level of abstraction > must be > consistent across the whole model. So adding detail to a single object > does > not change the LOA. The same detail must be added to all. I guess that > moves > the concept away from the continuous and more into the realms of a > discrete > concept.
If you regard each diagram as a different model I would agree with you. If you regard the OOA as a whole to be a single model, then I don't. I don't see anything inherently wrong with a model having multiple levels of abstraction. For example, I can think useful, high-level thoughts about the overall data relationships in an application by simply considering the objects and their relationships -- without a passing thought to attributes or even relational identifiers. Similarly, when I am trying to isolate a problem my initial pass is with an STD because I don't care about data store reads or writes and the other details. I want to isolate the problem to an action first. Once I have done that I care about the details within the action. When one models something as complex as a software application I think the models have to reflect varying levels of detail because to bring order to the complexity one has to view the application at various levels of summarization. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) NOT PARTICIPATING lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > Now I see where the disagreement is. The fact that the abstraction > uses NOT-PARTICIPATING does not require the implementation to > store it. Only the external interface must be preserved. Even > if you have an external wormhole that wants to see NOT > PARTICIPATING this does not require you to use it internally. > You could use your suggested implementation. I still disagree here. As soon as ref1 has a specific value in its data domain, then it is fair for me to operate on that attribute directly, such as:

    if (B.ref1 == NOT PARTICIPATING) then
        ...

or:

    tmp = B.get_attr_ref1()
    if (tmp == NOT PARTICIPATING) then
        ...

To me this means that it must be implemented as a data value. More specifically, I should be able to determine whether the relationship is active from B's data alone (e.g., without referencing any As, such as navigating R1 and testing if something were found). Also, there is the typing issue in the second case; tmp must have a data type that is compatible with the comparison to the constant. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) NOT PARTICIPATING Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > I still disagree here. As soon as ref1 has a specific value in > its data domain, then it is fair for me to operate on that > attribute directly, such as > > if (B.ref1 == NOT PARTICIPATING) then > .... > > or > > tmp = B.get_attr_ref1() > if (tmp == NOT PARTICIPATING) then > ... > To me this means that it must be implemented as a data value. More > specifically, I should be able to determine whether the > relationship is active from B's data alone It is reasonable for you to do this in the OOA (using appropriate action language) but not necessarily in the implementation. An implementation could implement your first example as:

    if (B.ref1_isNotParticipating()) then ...

This additional predicate could eliminate NOT PARTICIPATING from the data-domain of B.ref1.
or, as is more likely:

    if (B->link_r1 == NULL) then ...

or even:

    if (B.r1.length() == 0) then ...

It may be that you store your relationships in a class that is independent of B. In this case, you might need to write something along the lines of:

    if (! rel_r1::IsParticipatingWithB(B)) then ...

This discussion appears to stem from the apparent misconception that OOA is a design notation. It is not. You could make similar arguments by claiming that the number of states in a state model must be the same as the number of states in the implementation; or that this defines the number of bits in the state variable. Such ideas are demonstrably false. Dave. Not speaking for GPS. -- Dave Whipp, Embedded Systems Group GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Levels of abstraction lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I can best explore the nature of abstraction by looking at the > difference between analogue and digital electronics. These two > ways of looking at a system differ in their view of a signal > value. In the analogue domain, the value is a continuous range; > in the digital domain it is one of a number of discrete values. > > An implementation maps values in one domain to values in the > other. For example, logic-1 may be mapped to a range of values > from 4.5 to 5.5 volts; or it may be a range from 2.7 to 3.6 > volts. > > The mapping between values in the digital domain and values in > the analogue domain is 1c:M. This means that although every > digital value has an analogue counterpart, the reverse is not > true. There are analogue phenomena that have no digital > counterpart. I agree that this is a valid abstraction. Where I diverge is the assumption that it is the only valid form of abstraction. > The point of this discussion is to identify the role of > abstraction within OOA. The most important feature is that an > abstraction should be adequate to sustain a complete system > description from the point of view of that abstraction. The > other important feature is that the mapping from a high-level to > a low-level abstraction is 1c:M. Every feature in the high level > abstraction can be explained in terms of one or more low level > features; but the reverse is not true. I do not agree with this when both levels of abstraction describe the same thing. An OOA contains different levels of abstraction of the same thing. Therefore, at best the relationship is unconditional. For example, if my STD action description says, "Find a blue Frimmet", this will translate into a particular set of ADFD processes that are executed in a particular sequence. These are determined by the IM and only that set of processes dictated by the IM exist in the context of the action. One and only one set of instances will satisfy the STD pseudocode and they will produce one and only one result (a blue Frimmet). Now consider a relationship (which describes at a high level how classes of entities interact) in the IM that requires a single relational identifier (which provides the more detailed description of which particular instances of those classes interact). This is a 1:1 relationship. > So, does the concept of an object form a complete abstraction? > The answer must be "no".
A set of object names does not describe > the system dynamics; and it is unrelated to the system > interfaces. Even if you add attributes, you still do not get a > complete view of the behaviour of the system. It is necessary to > describe the activity at the level of the process model before a > complete system description can be developed, because it is only > at this level that interaction with the interfaces is described. > My feeling is that it is necessary to provide a mathematical > (formal) description of the effect of processes on data before > the system description is complete. However, this description is > not an implementation. Analogously, the effect of an edge in > the digital domain (a period of uncertainty) can be described > without reference to its true analogue meaning. Here is another point of disagreement. I do not consider an object to be simply a name. An object is an entity that exists regardless of whether I have yet determined what its attributes are. At the very least it is a standalone abstraction of a particular class of data store. I can also define what an object _is_ in terms of the real world problem space without knowing its attributes. Moreover, I can make high level statements about how it is related to other real entities in the system. When we make IMs we always start by identifying and _defining_ objects and their relationships without regard to attributes or identifiers. Once we have a handle on the objects and relations we develop identifiers and the last thing we do is develop attributes (though many of these fall out from an initial object blitz). When we do that first step we are dealing with perfectly valid abstractions of the real world problem space and this is independent of the details of identifiers and attributes. > When working within the digital domain, it is often possible to > abstract timing details in the early phases of design. Unit > delay models are frequently used when precise timing details are > not required. However, it is recognised that the unit delay > model is not a true abstraction. It is a postponement of detail. > It is only possible to get very accurate timings after the > design is complete and layout information can be used to extract > physical parameters such as track capacitances and lengths. At > this point, accurate timings can be calculated and these can be > fed back into the model to see if it works. > > The reason for claiming that the unit-delay model is not a true > abstraction is that the back-annotation of timing detail does > nothing more than tweak the values of the timing parameters. In > the unit delay model, all timings are set to the same value (1 > unit). As design progresses, the same timing parameters are > maintained, but their values are refined. > > If we compare this with the object model, you could claim that > parameters are not maintained - attributes, states, etc. are > added throughout the period of the model's construction. > However, on closer examination, you realise that the > identification of objects actually identifies the attributes, > etc., at the same time - you just have to discover what they > are. If you discover that they have the wrong attributes or > behaviour then you attempt to identify different objects. I have no disagreement with this per se. However, I see it merely as an argument that refinement through developing lower level abstractions can highlight imperfections in the initial development of the higher level abstractions. Hey, nobody's perfect. -- H. S. Lahman, Teradyne/ATB,
321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) NOT PARTICIPATING lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > It is reasonable for you to do this in the OOA (using appropriate > action language) but not necessarily in the implementation. An > implementation could implement your first example as: > > if (B.ref1_isNotParticipating()) then ... > > This additional predicate could eliminate NOT PARTICIPATING from > the data-domain of B.ref1. > > or, as is more likely: > > if (B->link_r1 == NULL) then ... > > or even > > if (B.r1.length() == 0) then ... This is the heart of where we disagree on this one. I think these last two are precluded in the implementation because they depend upon navigation of the relationship. When the specific value is defined for the attribute, this implies that the information is self-contained in the attribute itself. The analogy is with a loop control variable in C. The language requires that it be defined as a particular type outside the loop. However, this can impede optimization because the compiler must use that type (say, an integer index) even when another type (say, address pointer) might be more efficient. If I look at the value in the debugger during the iteration, it has to be interpreted as the defined type. (Some compilers will go ahead and optimize the type anyway, but they are incorrect implementations of C.) Similarly, if I am debugging the OOA system in MSVC++, I have to be able to look at NOT PARTICIPATING when I examine that object because the methodology says it will be there. If, however, the loop control variable is not explicitly typed, as in BLISS, then the compiler is free to use whatever implementation is best. When I look at the value in the debugger, I have to adjust to what the compiler defined. Similarly, I can't expect to see an attribute with a particular value when I am debugging the OOA application in MSVC++ if NOT PARTICIPATING was not a defined value. -- H. S. Lahman, Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com ("There is nothing wrong with me that could not be cured by a capful of Drano") Subject: Re: (SMU) SMALL: Link/Unlink lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > I think this is my problem. To my mind, link/unlink can *only* > change attributes (in instances at the OOA level) and nothing else. > > Are you saying data elsewhere is recording the use of link/unlink? Basically, yes. I think link/unlink represents a higher level of abstraction for activating relationships than writing relational attributes. This makes it clearer that fairly arbitrary mechanisms in the architecture can be used to implement it. BUT, whatever mechanism is selected during translation must still preserve relational integrity AS IF the metaphor of writing identifiers had been used. That is, the relational identifiers in the IM still must be the basis for deciding who is dancing with whom. In practice, even when using the identifier-writing metaphor the actual implementation would rarely use the writing of data attributes as a mechanism, for performance reasons.
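[As an editorial sketch of that point, with invented names -- one architecture among many: the implementation below stores a pointer for speed, yet the referential attribute remains derivable from the related instance's identifier, so relational integrity is preserved as if the identifier had been written.

    struct A { int id; };

    class B {
        A* r1_link = nullptr;   // architectural mechanism, not OOA data
    public:
        void link_r1(A* a) { r1_link = a; }
        void unlink_r1()   { r1_link = nullptr; }
        bool r1_participating() const { return r1_link != nullptr; }
        int  ref_a_id() const { return r1_link->id; }  // the identifier
                                                       // metaphor, derived
                                                       // on demand
    };
]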
Aside from the sticky issue (in my view but not Whipp's) of NOT PARTICIPATING, the two metaphors select from among the same arbitrary implementation mechanisms in the underlying architecture.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) NOT PARTICIPATING
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> The analogy is with a loop control variable in C. The language requires that it be defined as a particular type outside the loop. However, this can impede optimization because the compiler must use that type (say, an integer index) even when another type (say, address pointer) might be more efficient. If I look at the value in the debugger during the iteration, it has to be interpreted as the defined type. (Some compilers will go ahead and optimize the type anyway, but they are incorrect implementations of C.)

I don't think the C standard does mandate that optimised code is compatible with your debugger. If the loop variable is not used outside the loop then an implementation is allowed to optimise it away. In fact, if the first operation on the loop variable outside the loop is, under all conditions, a write: then the compiler can optimise the loop variable even when it is used after the loop.

> Similarly, if I am debugging the OOA system in MSVC++, I have to be able to look at NOT PARTICIPATING when I examine that object because the methodology says it will be there.

The methodology does not say that it is there. An architecture defines what you can see in the debug build of your code. I am not aware of any restrictions on what an architecture can do, provided the observable behaviour is valid under the execution rules of OOA. The term "observable behaviour" relates to observation through OOA interfaces, not implementation interfaces such as a debugger. Even a wormhole to an implementation domain does not expose the inner implementation, because the interface would be mapped in a bridge. If you do an optimised build, then a compiler is free to optimise out your architectural features - so even those may not be viewable in your debugger.

This email is probably just restating what you disagree with; but it really is fundamental to the other debates we are having. If the method really does constrain the implementation to the degree that you suggest then:

1. I would agree that link/unlink is essential for many implementations.
2. Recursive design would be a meaningless concept.
3. I, and many others, wouldn't use the method.

Dave. Not speaking for GPS.

-- Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 fax. +44 (0)1752 693306
mailto:david.whipp@gpsemi.com http://www.gpsemi.com

Subject: Re: (SMU) SMALL: Link/Unlink
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > Given this, my assertion stands -- link/unlink provides better error detection.
>
> I stand by my assertion. The error you identify is conceptual. It does, indeed, exist independently of the notation; but it is a result of the thought processes of the analyst.
> Your error is that a modeller wants to link two objects together using a relationship; but doesn't realise that a second relationship is tied to the first. Writing a value to the referential attribute, in this scenario, has the unexpected side effect of changing two relationships.
>
> This error exists because the analyst attempted to link two objects - and focused on the relationship. It is the focus on the relationship that leads to the mistake. Consider a different scenario: I'll reuse my previous example. The analyst is working on a model and finds an object called "country" with an attribute "president". Realising that an event is received with the recently elected president as supplemental data, the analyst decides to write this name to the "president" attribute. The former president is replaced with the new one ("The king is dead: Long live the King!")
>
> The analyst in this situation didn't realise that the attribute just written is a referential attribute - indeed, it might formalise many relationships. It really doesn't matter. The required result was to set the value of the attribute, not to link instances together. That is why I said your error was a result of the mindset. I do not doubt that there are errors that can exist for the data modeller that are inconceivable to the relationship modeller. But that was not the point of this subthread.
>
> Sometimes, the information modelling process may indicate that it is necessary to focus on the relationship, rather than the participating objects. Even without the introduction of link/unlink, the method provides a means to do this. I'll get back to that in just a moment.

I do not see any distinction. In both mindsets the core problem is that the analyst did not look at the IM. This has nothing to do with mindsets. It has to do with carelessness. Using either metaphor the analyst has a responsibility to examine the IM for potential side effects. GIVEN that the error has been made, link/unlink still provides a more reliable mechanism for detecting the error.

> > You have used the phrase, "if you don't think in terms of link/unlink..." several times. If link/unlink is the specified alternative defined in the action language, you don't have a choice about thinking about it. I could turn this around and argue that you seem to always be thinking in terms of writing attributes when talking about link/unlink. Don't Do That and the pain may go away.
>
> I agree with this paragraph 100.0%. Indeed, it is the point that I have been trying to make. I have said on a number of occasions in this thread that to make link/unlink fully compatible with the OIM, the OIM should be changed to replace referential attributes with relationship information structures.

Well, we miscommunicated somewhere, because I thought the "relationship information structure" _was_ the suite of relational identifiers. However, I do not see the need for abandoning the relational identifiers in favor of an androgynous reference-style structure abstraction. At the IM level the identifiers are quite useful for ensuring the path integrity of relational loops. Also, the idea of sharing some identifiers among relationships may be significant in the problem space. I don't see how you do these things with an abstraction like a Relationship Information Structure as an attribute. Render unto the IM the things that are the IM's and render unto the action language the things that do links.
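The loop-integrity point deserves a concrete illustration (a contrived C sketch with invented objects X, Y and Z and invented relationships R10, R11 and R12; C is just a convenient notation here):

    /* X relates to Y via R10, Y to Z via R11, and X to Z directly via
       R12, forming a relational loop.  Path integrity means that both
       routes from an X must arrive at the same Z, and the check is
       expressed directly in terms of identifiers.                      */
    typedef struct Z { int z_id; } Z;
    typedef struct Y { int y_id; Z *r11; } Y;
    typedef struct X { int x_id; Y *r10; Z *r12; } X;

    static int loop_is_consistent(const X *x)
    {
        return x->r10->r11->z_id == x->r12->z_id;
    }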
So long as the use of references in actions is restricted (i.e., they are locally derived from identifiers and aren't passed out of scope), they are nothing more than a syntactically convenient surrogate for identifiers. Thus I do not see them as incompatible or mutually exclusive.

> Alternatively, link/unlink should be abolished and a consistent abstraction of referential attributes used for both information and process models.
>
> However, as I said a moment ago, the method already provides the means of eliminating referential attributes from participating objects: and it does not require the introduction of link/unlink. The mechanism is, of course, the associative object. The creation of an associative object links the relationship; and its deletion unlinks the relationship. When you use an associative object, there is no need to use NOT PARTICIPATING (which is a good reason for always placing an associative object on 1c:1c, 1c:M and 1c:Mc relationships).
>
> Given that the method, pre-SMALL, provided the choice of both styles of model: what is the benefit of introducing a new mechanism, as an addition, that eliminates that choice?

I think the answer is that it is a lot less klutzy than using associative objects. (Though I agree associative objects are preferable to NOT PARTICIPATING.) Overall, I think the benefits are:

(1) It is consistent with what the analyst is actually doing in an action -- activating/deactivating relationships on an individual basis.

(2) It is a higher level of abstraction because it unburdens the analyst of the details of particular identifiers. This is particularly true when compound identifiers are used.

(3) The use of relationship references within an action tends to be more compact and readable.

(4) It improves the chances of detecting certain types of analyst errors.

(5) It removes the ambiguity of NOT PARTICIPATING for shared conditional relationships.

(6) I find the paradigm of navigating to a reference and then performing relationship operations (e.g., unlink) on it to be more intuitive, albeit wordier, than writing a value to a relational attribute.

> > I contend that writing the attribute is not so simplistic. It is a metaphor for much more complex activity. I believe that it is undeniable that writing that attribute is changing relationships. That means it is removing existing relationships and it is creating new ones at the instance level. This is important at the analysis level. You cannot hand simulate or verify relational loops unless you are aware of this at the OOA level. Similarly, it would be terminally naive to think that writing NOT PARTICIPATING to a shared attribute for two conditional relationships was "just setting an attribute value".
>
> Perhaps I am terminally naive, but from the point of view of OOA, I do believe that writing NOT PARTICIPATING is "just setting an attribute value". Within the abstraction, there is nothing complex about writing an attribute. There is absolutely no difference between writing to a descriptive attribute and writing to a referential attribute (that should answer your OOA-of-OOA question in another thread). At the analysis level, there is no concept of "removing existing relationships and it is creating new ones". It is only when this high level action is mapped onto a lower level abstraction, that the true complexity of the operation is revealed.

We are poles apart on this one.
At the analysis level there are two reasons for writing to an attribute: you want to store a value or you want to modify a relationship. In the first case the syntactic artifact of the notation is to write to a data attribute but the semantics is to store a value. In the second case the syntactic artifact of the notation is to write a value to a referential attribute but the semantics of the analysis is to modify a relationship.

In the first case the syntax is fairly closely related to the actual implementation's data store so that one can be fairly confident that the value will actually be written somewhere. In the second case the syntax is tenuously related to the actual implementation so that the write is largely symbolic and one cannot even count on a value being written. Thus the attribute write has no real significance, other than the requirement that relational integrity is somehow maintained. However, the semantics of both activities are quite clear in the analyst's mind.

> I am arguing that, within the OOA abstraction, referential attributes do define the relationship. You have agreed on a number of occasions (though you keep changing your story) that the only information within the OOA that describes relationships is the referential attribute. You often seem to confuse the issue by saying that link and unlink are able to manipulate relationships independently of the referential attributes without introducing any hidden variables - I contend that this is impossible (a hidden associative object is required). If referential attributes and relationships are synonymous within the abstraction: then I have every right to say that modifying the attribute does, indeed, modify the relationship.

First, I never agreed to your second sentence. While I agree that referential attributes are essential to maintaining relational integrity in the model, I do not consider them to be the definition of the relationship. The relationship is defined in the IM with the cardinality, conditionality, label descriptions, and relationship description. What I think they do define, at a different level of abstraction, is the specific instances involved in the relationship.

Second, I believe that the referential attributes in the IM define constraining requirements that any implementation of link/unlink (and references) must satisfy. Therefore link/unlink are not independent of relational identifiers -- in their implementation. However, at the level of state actions in the OOA there is no need for detailed references to relational attributes, so the relational identifiers are transparent at this level of abstraction in the OOA. This I regard as a virtue of link/unlink.

Finally, your last sentence seems to contradict other statements you have made. For example, "The write operation is not overloaded -- it does a simple task -- it sets the value of the attribute. An implementation may require it to do more than this...." from 1/9/98. This was in response to my assertion that the write was overloaded because it performed two analysis tasks -- removing the existing relationships and installing new ones.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Morrin...
> My perception is that the relationship itself is a representation of a boolean 'attribute' which is not independent of the referential attribute within one of the related objects.
>
> It would be my preference if this 'hidden' attribute could be tested, independent of the referential, making the NOT PARTICIPATING value redundant. To clarify, I would propose that before reading the referential attribute of a conditional relationship, you would first test to see that the relationship is linked.
>
> It might be argued that this complicates the action, but to me it feels like a real world complication, and no more complicated than handling the special case of NOT PARTICIPATING after reading the referential.

I think if you use the link/unlink/reference paradigm, then you already have a fairly general mechanism for doing this. You navigate the relationship and then test the resulting reference for UNDEFINED or some such. If the navigation results in a set, you can check the member count for zero. (I don't have the paper handy and I don't recall if they defined UNDEFINED as a valid reference value, but if not they should. [This is a different situation than NOT PARTICIPATING, Dave, because the value is not stored so it is a syntactic artifact only.]) This can easily be implemented in a variety of ways to optimize performance.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) NOT PARTICIPATING
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> They work because the implementation is supporting relational integrity in a different manner than that described in the notation. Let's say I have a relationship between A and B that is A:B::1c:Mc. It is fine for the architecture to support a 1c:Mc by placing a pointer to a linked list on the A side. This is strictly a performance issue. As it happens a NULL link list pointer satisfies the need for NOT PARTICIPATING from the A side viewpoint.

Can you explain what A:B::1c:Mc means? The text indicates that A is on the many side. Should this be B:A::1c:Mc?

> However, the architecture also has to support getting from a B to the related A. The B side must have a mechanism to do this. One dumb way to do this would be to search all the A instances' linked lists for the relevant B instance. This satisfies the relational integrity issues for the identifier metaphor. But it is not legal to use this mechanism so long as NOT PARTICIPATING is required. To support NOT PARTICIPATING there must be an attribute of some sort in B to hold the value that the methodology has specified must exist if the relationship is not active from the particular B.

I would go a long way to avoid using NOT PARTICIPATING in the analysis. This problem goes away if an Associative Object is used as Whipp suggests. I sometimes prefer to use subtypes to eliminate the conditionality.

I have no problem with the idea of NOT PARTICIPATING in the Architecture, where it can appear as a NULL pointer, arbitrary id of 0, etc.
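For instance (a hypothetical C fragment, invented names), the Architecture rendering costs almost nothing:

    typedef struct A_inst A_inst;

    typedef struct B_inst {
        int     b_id;
        A_inst *r1;   /* conditional relationship: a NULL pointer plays
                         the role of NOT PARTICIPATING, purely as an
                         implementation device                          */
    } B_inst;

    /* test participation before navigating; no special attribute
       value is ever read at the OOA level                              */
    static int participates_in_r1(const B_inst *b) { return b->r1 != NULL; }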
Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) SMALL: Link/Unlink
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mike Morrin...

> > Are you saying data elsewhere is recording the use of link/unlink?
>
> I think that there is!

Well I think that there is not!

> My perception is that the relationship itself is a representation of a boolean 'attribute' which is not independent of the referential attribute within one of the related objects.

You have to be very careful about what happens in OOA and what happens in the Architecture (implementation).

> It would be my preference if this 'hidden' attribute could be tested, independent of the referential, making the NOT PARTICIPATING value redundant. To clarify, I would propose that before reading the referential attribute of a conditional relationship, you would first test to see that the relationship is linked.

Link/unlink operate in OOA, while what you describe above could be considered for implementation in the Architecture. The 'hidden' attribute would never appear in the OOA. Instead it could be a field in a record in a linked list in the Architecture. Some downsides to your plan are: it would require more memory, it's more complex and it may not even increase performance, due to the need to update the new field.

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) SMALL: Link/Unlink
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to David Whipp...

> I am arguing that, within the OOA abstraction, referential attributes do define the relationship. You have agreed on a number of occasions (though you keep changing your story) that the only information within the OOA that describes relationships is the referential attribute. You often seem to confuse the issue by saying that link and unlink are able to manipulate relationships independently of the referential attributes without introducing any hidden variables - I contend that this is impossible (a hidden associative object is required). If referential attributes and relationships are synonymous within the abstraction: then I have every right to say that modifying the attribute does, indeed, modify the relationship.

Excellent!

> Let me finish by summarising the changes I would make to SMALL wrt this discussion, on the basis that they add nothing to the method:
>
> - Eliminate restrictions on reading arbitrary valued attributes

I assume you mean: Eliminate restrictions on reading identifying attributes of type arbitrary. But remember the values of these attributes still have no meaning in the OOA. Same for the one below.

> - Eliminate restrictions on reading and writing referential attributes
> - Eliminate Link and Unlink process types

Yes!

> - Consider eliminating NOT PARTICIPATING in favour of associative objects

And also subtypes.

> - Consider eliminating the Migrate process type - it isn't powerful enough for multilevel subtype relationships, where many objects may be created and destroyed during the migration.
> + Consider mandating that identifiers are constants.

I always thought they were!
Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) NOT PARTICIPATING
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I don't think the C standard does mandate that optimised code is compatible with your debugger. If the loop variable is not used outside the loop then an implementation is allowed to optimise it away.
>
> In fact, if the first operation on the loop variable outside the loop is, under all conditions, a write: then the compiler can optimise the loop variable even when it is used after the loop.

No, C does not make the loop control variable part of the "for", "while" or "do" constructs, and that is the problem. The data type is determined outside the loop and the compiler is not free to change that (i.e., it can't change the semantics of the data store from integer value to pointer to integer).

> > Similarly, if I am debugging the OOA system in MSVC++, I have to be able to look at NOT PARTICIPATING when I examine that object because the methodology says it will be there.
>
> The methodology does not say that it is there. An architecture defines what you can see in the debug build of your code. I am not aware of any restrictions on what an architecture can do, provided the observable behaviour is valid under the execution rules of OOA.

The restriction on the architecture is that the referential attribute has at least one specific value defined by the methodology. To accommodate that data value there must be a corresponding data store in the implementation that is associated with the instance, and the type of that data store must be consistent with the value of NOT PARTICIPATING. The point is that the freedom of your "observable behavior" is only available so long as the referential attribute and associated identifiers may be symbolic. They cease to be so when they start to take on specific data values.

If I choose to define an object identifier as some tangible semantic, such as a temperature value, that places a restriction on the architecture about how it can implement the data store. It can no longer, for example, store instances in an array of structs where the instance identifier is the index in the order that the instances were added. The architecture now _must_ have an attribute for the temperature in each instance. It cannot be optimized out as an array index. [The architecture can still do the array and maintain referential integrity through indices, but it will also have to associate the temperature attribute with a particular array index if one wants referential attributes to use the index for performance.]

The same thing is true for relational attributes. If they have concrete values, that will necessarily restrict the way that the data store is implemented. A concrete value has to be observable.

> This email is probably just restating what you disagree with; but it really is fundamental to the other debates we are having. If the method really does constrain the implementation to the degree that you suggest then:
>
> 1. I would agree that link/unlink is essential for many implementations.
> 2. Recursive design would be a meaningless concept.
> 3. I, and many others, wouldn't use the method.

While I think that the introduction of NOT PARTICIPATING was unfortunate, I don't think the first two conclusions follow.
Sure, link/unlink would raise things back to a more symbolic or metaphorical level of abstraction where the translation was unencumbered by restrictions. However, I think the over specification represented by NOT PARTICIPATING only presents a methodology glitch in the relatively rare case of shared identifiers for conditional relationships. All other situations work just fine without link/unlink at the OOA level.

Insofar as (2) is concerned, it would very rarely hinder the architecture in practice because in the vast majority of cases relationships are implemented as pointers where a NULL value is legitimate and can be defined as NOT PARTICIPATING in the architecture. The pointer is still a data store element in the instance and the value is observable whenever the relationship is not active, which satisfies the restriction that I see placed on the architecture by the specification that a specific data value must exist. Even in those very rare situations where it would limit the architecture, it does not preclude implementation (though it might add modest complication, such as two-way pointers). So I think it is a bit much to say that introducing NOT PARTICIPATING has rendered recursive design meaningless.

Every OOA construct places restrictions on the architecture. Without NOT PARTICIPATING, the relational identifiers are a very high level abstraction that place minimal restrictions on the implementation. But the implementation is still restricted to mechanisms that will safely preserve the relational integrity of the relational data model (which would not necessarily be true for, say, OMT). Adding NOT PARTICIPATING indirectly introduced a new restriction, but I think that it is more of a minor annoyance than a catastrophic failure for RD.

Having said all this, you were just pulling my chain, right? You just wanted to see me finessed into defending NOT PARTICIPATING!

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink
Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mike Finn

> Responding to Mike Morrin...
>
> > > Are you saying data elsewhere is recording the use of link/unlink?
> >
> > I think that there is!
>
> Well I think that there is not!

A good basis for discussion.

> > My perception is that the relationship itself is a representation of a boolean 'attribute' which is not independent of the referential attribute within one of the related objects.
>
> You have to be very careful about what happens in OOA and what happens in the Architecture (implementation).

Yes, but see below.

> > It would be my preference if this 'hidden' attribute could be tested, independent of the referential, making the NOT PARTICIPATING value redundant. To clarify, I would propose that before reading the referential attribute of a conditional relationship, you would first test to see that the relationship is linked.
>
> Link/unlink operate in OOA, while what you describe above could be considered for implementation in the Architecture.

No, I was referring to an analysis concept NOT implementation. I regret calling it a 'hidden attribute', as it is not really hidden, nor an attribute.
It is information which does not appear in the OIM, but is (or should be) visible in the behavioural description of the system, as is the cardinality of instances of an object (yes, I suppose I am describing cardinality of instances of a relationship).

Perhaps the point I was really trying to make is that I REALLY don't like NOT PARTICIPATING as a concept, and I would prefer to make it illegal (even a runtime error) to read a referential attribute and get that value. I think that analysts should be forced to test for and deal with the special case before reading the referential.

regards, Mike

Subject: Re: (SMU) NOT PARTICIPATING
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn....

> > They work because the implementation is supporting relational integrity in a different manner than that described in the notation. Let's say I have a relationship between A and B that is A:B::1c:Mc. It is fine for the architecture to support a 1c:Mc by placing a pointer to a linked list on the A side. This is strictly a performance issue. As it happens a NULL link list pointer satisfies the need for NOT PARTICIPATING from the A side viewpoint.
>
> Can you explain what A:B::1c:Mc means? The text indicates that A is on the many side. Should this be B:A::1c:Mc?

You are correct, the text is inconsistent. I did mean

    A <------------> B
      1c          Mc

but the text should have reversed the As and Bs. Unfortunately, as I look at the quoted text below, the point I was trying to make is a non sequitur because it assumes the identifier is on the A side, which it couldn't be. To get from A to B, one could search the B's regardless of NOT PARTICIPATING because one would be looking for an A identifier only.

> > However, the architecture also has to support getting from a B to the related A. The B side must have a mechanism to do this. One dumb way to do this would be to search all the A instances' linked lists for the relevant B instance. This satisfies the relational integrity issues for the identifier metaphor. But it is not legal to use this mechanism so long as NOT PARTICIPATING is required. To support NOT PARTICIPATING there must be an attribute of some sort in B to hold the value that the methodology has specified must exist if the relationship is not active from the particular B.
>
> I would go a long way to avoid using NOT PARTICIPATING in the analysis. This problem goes away if an Associative Object is used as Whipp suggests. I sometimes prefer to use subtypes to eliminate the conditionality.
>
> I have no problem with the idea of NOT PARTICIPATING in the Architecture, where it can appear as a NULL pointer, arbitrary id of 0, etc.

I agree about not using it, but perhaps for different reasons. The only case where it simply does not work is when two conditional relationships from an object share a relational identifier in that object. Even in that case it may not be relevant because truly temporal conditionality (relationships that come and go while the instances remain in existence) is pretty rare. So I regard this as more of a methodological glitch than a major failing.

The reason I would not like to use it is because it makes a lot more work for the architecture. Moreover, that added work may be reflected in added overhead that burdens the processing for every relationship.
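The failing case mentioned above is easy to see in miniature (an invented sketch, not a model from this thread):

    /* Two conditional relationships, R7 and R8, both formalised by
       the same referential attribute.  One stored value cannot record
       "participating in R7 but not in R8"; writing NOT PARTICIPATING
       to shared_ref would deactivate both at once.                    */
    typedef struct {
        int d_id;
        int shared_ref;   /* formalises both R7 and R8 */
    } D_inst;

An architecture needs two independent mechanisms (flags or pointers) to carry what the single attribute cannot.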
I don't care much for the associative object approach because it clutters the IM, but I would prefer it to NOT PARTICIPATING. This is just a readability issue because the real estate in the IM tends to be rather precious. Fortunately for us this is all academic since our CASE tool uses link/unlink.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink
"Dean S. Anderson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:

> > - Consider eliminating NOT PARTICIPATING in favour of associative objects
>
> And also subtypes.

Don't eliminate subtypes! They express a form of relationship that is difficult to represent in any other way. Think of the super / subtype relationship as: "Object X can be and must be related to one and only one of the objects A, B or C" for this model:

                   X
                   |
                   - R1
                   |
     _______________________________
     |              |              |
     |              |              |
     A              B              C

This allows you to capture a requirement at the IM level that would otherwise require a detailed look at the STDs and ADFDs to understand.

Dean S. Anderson
Transcrypt International / EF Johnson Radio Systems (ka0mcm@winternet.com)

Subject: Re: (SMU) Levels of abstraction
"Dean S. Anderson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I find this discussion to be very interesting also. I believe there are several levels of abstraction within the OOA. The main problem in identifying the levels occurs because the IM actually contains two levels of abstraction. One level is the relationships between objects and the other is the object attribute lists. If you remove the object attribute lists from the IM and call it the "Object Relationship Model (ORM)" and then remove the relationships from the IM and call it the "Data Model (DM)", the levels of abstraction become:

Level 1 (highest):
    Object Relationship Model (ORM)
    Object Communication Model (OCM)
    Object Access Model (OAM)

Level 2:
    Data Model (DM)
    State Transition Diagrams (STD)

Level 3 (lowest):
    Action Data Flow Diagrams (ADFD)

This follows the concept of Object encapsulating data and process.

Dean S. Anderson
Transcrypt International / EF Johnson Radio Systems (ka0mcm@winternet.com)

Subject: Re: (SMU) SMALL: Link/Unlink
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

"Dean S. Anderson" wrote:

> Mike Finn wrote:
> > > - Consider eliminating NOT PARTICIPATING in favour of associative objects
> >
> > And also subtypes.
>
> Don't eliminate subtypes!

Don't worry, he didn't mean what he wrote. What he meant to write was that NOT PARTICIPATING can be eliminated either by adding an associative object or by subtyping the object at the opposite end of the conditional relationship, i.e. one subtype has the referential attribute and is unconditionally related; the other subtype is not related and has no referential attribute. This technique often brings out an abstraction that was missed when the relationship was introduced.

I agree with this. I just forgot to say it in my original post.

Dave. Not speaking for GPS.

-- Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 fax. +44 (0)1752 693306
mailto:david.whipp@gpsemi.com http://www.gpsemi.com
Subject: Re: (SMU) SMALL: Link/Unlink
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman:

I think we are having terrible problems communicating. We both seem to think that the other person keeps on contradicting themselves. Take the following two quotes by Lahman - both from the same email:

> Well, we miscommunicated somewhere, because I thought the "relationship information structure" _was_ the suite of relational identifiers.

> While I agree that referential attributes are essential to maintaining relational integrity in the model, I do not consider them to be the definition of the relationship.

To my mind, these contradict each other. The first says that the referential attributes store the information about the relationships; the second implies that they don't. I believe that Lahman's position is that information about "run-time instances" of relationships exists outside the run-time values of referential attributes. (2 notes: I don't agree with the concept of an instance of a relationship; and, when I say run-time, I mean during simulation of OOA, not execution of generated code).

Lahman reads my writing and thinks that I contradict myself:

> Finally, your last sentence seems to contradict other statements you have made. For example, "The write operation is not overloaded -- it does a simple task -- it sets the value of the attribute. An implementation may require it to do more than this...." from 1/9/98.

Just as Lahman probably does not see any contradiction in what he writes: I do not see any contradiction in what I wrote. I separated the behaviour of the operation in the context of OOA from its behaviour in the context of an implementation.

The contradiction implied by this would be the same as if I said: "a constant logic value is an oscillating analogue value." This apparent contradiction is explained by the shift to a different abstraction. Many of our apparent disagreements may be due to a difference in the way we signal a shift of abstraction. The contradiction in Lahman's quotes (above) is explained because he believes that process models and object models exist in different abstractions - a belief I do not share.

However, to get back to responding to the post:

> I think the answer is that it is a lot less klutzy than using associative objects. (Though I agree associative objects are preferable to NOT PARTICIPATING.) Overall, I think the benefits are:
>
> (1) It is consistent with what the analyst is actually doing in an action -- activating/deactivating relationships on an individual basis.

This would be true if that were what the analyst is doing. But, as you know, I don't agree.

> (2) It is a higher level of abstraction because it unburdens the analyst of the details of particular identifiers. This is particularly true when compound identifiers are used.

As I have stated previously, I consider the data to be at a higher level of abstraction than link/unlink. (Quick note: no contradiction here because, from my perspective, link/unlink are implementation devices that have been wrongly elevated into the OOA)

> (3) The use of relationship references within an action tends to be more compact and readable.

We could probably provide endless examples to support and contradict this. It depends what the action is doing.

> (4) It improves the chances of detecting certain types of analyst errors.
Whilst we will have to agree to disagree on the specifics, this statement is so general that it is impossible to disprove. Please allow me to make a counter claim: the use of referential attributes improves the chances of detecting certain types of analyst errors.

> (5) It removes the ambiguity of NOT PARTICIPATING for shared conditional relationships.

I don't see NOT PARTICIPATING as ambiguous: just not very nice.

> (6) I find the paradigm of navigating to a reference and then performing relationship operations (e.g., unlink) on it to be more intuitive, albeit wordier, than writing a value to a relational attribute.

A subjective point of view. I can't disagree with the statement: I can just state that I find the opposite to be true.

> We are poles apart on this one. At the analysis level there are two reasons for writing to an attribute: you want to store a value or you want to modify a relationship.

Use the term "reason" with care. Whenever you write a value to an attribute, the reason is to store the value. There may be higher level reasons, but these higher level reasons do not consist exclusively of "to modify the relationship".

> In the first case the syntax is fairly closely related to the actual implementation's data store so that one can be fairly confident that the value will actually be written somewhere.

It depends on the implementation, but yes, in most implementations it is not too difficult to link the implementation value to the OOA value.

> In the second case the syntax is tenuously related to the actual implementation so that the write is largely symbolic and one cannot even count on a value being written.

It is more likely that a non-localised storage mechanism will be used for referential attributes than for plain descriptive attributes. However, this is by no means assured. It will generally be possible to write a function (possibly only for debug) that will reconstruct the OOA-domain value from the implementation-domain value. Such functions allow a debugger to observe the behaviour of the implementation from the perspective of the OOA abstraction.

Dave. Not speaking for GPS.

-- Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 fax. +44 (0)1752 693306
mailto:david.whipp@gpsemi.com http://www.gpsemi.com

Subject: Re: (SMU) NOT PARTICIPATING
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lahman wrote:

> If I choose to define an object identifier as some tangible semantic, such as a temperature value, that places a restriction on the architecture about how it can implement the data store. It can no longer, for example, store instances in an array of structs where the instance identifier is the index in the order that the instances were added. The architecture now _must_ have an attribute for the temperature in each instance. It cannot be optimized out as an array index. [The architecture can still do the array and maintain referential integrity through indices, but it will also have to associate the temperature attribute with a particular array index if one wants referential attributes to use the index for performance.]

The concession that you make is very important. The architecture _can_ use indexes to identify the objects; and store indexes instead of referential attributes (provided it makes sure everything is kept consistent).
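For instance (a hypothetical C sketch, invented names; one scheme among many), the index doubles as the stored reference while a side table ties it back to the identifying value:

    #define MAX_INST 64

    typedef struct {
        double temperature;   /* the tangible OOA identifier */
        /* ... other attributes ... */
    } Instance;

    static Instance pool[MAX_INST];   /* array index = internal handle */
    static int      count;

    /* referential attributes can be stored as cheap indexes...          */
    static double identifier_of(int index) { return pool[index].temperature; }

    /* ...provided the architecture can also go the other way.  Exact
       comparison is acceptable: the value is an identifier here, not a
       measurement.                                                      */
    static int find_by_identifier(double t)
    {
        for (int i = 0; i < count; i++)
            if (pool[i].temperature == t) return i;
        return -1;   /* no such instance */
    }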
This is only possible if the architecture also provides a means of tying the index to a value. The advantage is that the real value is only rarely needed. When a temperature is passed between accessors; or on an event, the index suffices. If the temperature is passed to an expression that is otherwise constant, the result can be precalculated and associated with the index. Only when a calculation is required is the real floating point value required.

Not all these optimisations would actually be beneficial. The purpose of architecture (design) is to use reasonable tricks and avoid others.

One last point about the temperature: there is freedom in the storage of the value. It is necessary that the operations work correctly; but, beyond that, the gloves are off. If I choose to store a temperature as two integers and a boolean then this may be perfectly valid.

You might think that this undermines my argument about NOT PARTICIPATING because I am agreeing that the value must be stored. However, the important point is that the operations must work correctly; and that any means of storage that allows them to work is valid. Mathematically, it is perfectly valid to define a value as the absence of another value in a list.

Dave. Not speaking for GPS.

-- Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 fax. +44 (0)1752 693306
mailto:david.whipp@gpsemi.com http://www.gpsemi.com

Subject: Re: (SMU) Levels of abstraction
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> For example, if my STD action description says, "Find a blue Frimmet", this will translate into a particular set of ADFD processes that are executed in a particular sequence. These are determined by the IM and only that set of processes dictated by the IM exist in the context of the action. One and only one set of instances will satisfy the STD pseudocode and they will produce one and only one result (a blue Frimmet).

The problem with this paragraph is that, as is all too well known, a natural language (NL) specification is almost always incomplete and ambiguous. There are many potential models that meet it. It is only by constructing the OOA model that you tie down the NL spec. (Your NL spec: "Find a Blue Frimmet" may be unambiguous, but in a wider context you may find that other questions are raised - for example, is there a rule when there is more than one blue frimmet?)

The relationship between the spec and potential models is therefore 1:M. Furthermore, artifacts will be introduced in the model that are not part of the NL spec. So it becomes 1c:M (possibly 1c:Mc). Of course, you can later rewrite the spec to attempt to produce a 1:1 mapping - but what would be the point?

> Here is another point of disagreement. I do not consider an object to be simply a Name. An object is an entity that exists regardless of whether I have yet determined what its attributes are.

I think that this is precisely what I stated a couple of paragraphs later in my post. So we don't disagree. It is our conclusions from this that differ. I conclude from this that attributes are part of the same abstraction as the objects. By the same argument, I also conclude that state models and ADFD/SMALL process models are also part of the same abstraction. If attributes, etc., really were part of different abstractions (as you have suggested) then they would not be implicit in the object abstraction.
They would be explicitly mapped from that other abstraction.

> I can also define What an object _is_ in terms of the real world problem space without knowing its attributes. Moreover, I can make high level statements about How it is related to other real entities in the system.

And you should also be able to make high level statements about how it interacts with other real-world entities in the system.

Dave. Not speaking for GPS.

-- Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 fax. +44 (0)1752 693306
mailto:david.whipp@gpsemi.com http://www.gpsemi.com

Subject: Re: (SMU) SMALL: Link/Unlink
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> > I think this is my problem. To my mind, link/unlink can *only* change attributes (in instances at the OOA level) and nothing else.
> >
> > Are you saying data elsewhere is recording the use of link/unlink?
>
> Basically, yes. I think link/unlink represents a higher level of abstraction for activating relationships than writing relational attributes.

Do you expect to see changes in that part of OOA-of-OOA that represents Process Models? Are there some new objects which capture the use of link/unlink? Could ADFDs be changed to exploit the higher level of abstraction provided by link/unlink? Sorry about all the questions.

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) NOT PARTICIPATING
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

From the SMALL paper (text in [...] is mine):

    There are two kinds of attributes whose value is
    architecture-dependent:

    o Referential Attributes
    o Identifying Attributes of type Arbitrary

    The values of these attributes are therefore meaningless in the
    OOA models [including ADFDs], and therefore may not be accessed
    directly as described here.

Just a few comments on the above before I get to my point. Firstly, the second "therefore" does not follow. SMALL could access these attributes because ADFDs already do! It seems to be a justification for the use of link/unlink. Secondly, these attributes are meaningless in the sense of being unknowable. Comparisons on these attributes (in OOA) are valid, even if you don't know the actual values being compared. Thirdly, I think this quote marks a subtle change in OOA. All referential attribute values are now unknowable in the OOA.

Looking at the tables on page 18 of the SMALL paper: The values for the referential attribute Bench.Cave cannot now be filled in without knowing what values the Architecture will choose. Therefore, NOT PARTICIPATING is not a valid concept in OOA because no value can be associated with any referential attribute. Given a relationship of the 1c:M sort in the analysis, where there are unmapped instances on the many side: You may ask what value shall I put in the referential attribute column for the unmapped instances? The answer is: This is a question you cannot ask!

Finally, the implication of the above (and this is my point) is that it is NOT required to add Associative Objects or Subtype Objects where there are unmapped instances, in relationships such as 1c:M. These extra objects appear as a result of asking the wrong question.
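One way to picture such an attribute (a hypothetical C rendering): an opaque token that supports comparison and nothing else:

    /* An architecture-dependent identifier: the analysis may compare
       two of them for equality, but may never depend on -- or even
       look at -- the underlying value, because the Architecture
       chose it.                                                      */
    typedef struct { unsigned opaque; } ArbId;

    static int same_instance(ArbId a, ArbId b)
    {
        return a.opaque == b.opaque;
    }

    /* a test like (id.opaque == 42) would be meaningless in the OOA */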
Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) SMALL: Link/Unlink
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dean S. Anderson...

> > > - Consider eliminating NOT PARTICIPATING in favour of associative objects
> >
> > And also subtypes.
>
> Don't eliminate subtypes!

Whoops! Although I would like to see a few things removed from OOA, rest assured, subtypes are not one of them! Many thanks to Dave Whipp for replying before I could.

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) SMALL: Link/Unlink
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mike Morrin...

> > > It would be my preference if this 'hidden' attribute could be tested, independent of the referential, making the NOT PARTICIPATING value redundant. To clarify, I would propose that before reading the referential attribute of a conditional relationship, you would first test to see that the relationship is linked.

When I posted my last reply I decided that your motivation was to increase Architecture performance. This I now think was a mistake on my part.

> > Link/unlink operate in OOA, while what you describe above could be considered for implementation in the Architecture.
>
> No, I was referring to an analysis concept NOT implementation. I regret calling it a 'hidden attribute', as it is not really hidden, nor an attribute. It is information which does not appear in the OIM, but is (or should be) visible in the behavioural description of the system, as is the cardinality of instances of an object (yes I suppose I am describing cardinality of instances of a relationship).

I understand about the information not appearing on the OIM, but can't quite grep the rest.

> Perhaps the point I was really trying to make is that I REALLY don't like NOT PARTICIPATING as a concept, and I would prefer to make it illegal (even a runtime error) to read a referential attribute and get that value. I think that analysts should be forced to test for and deal with the special case before reading the referential.

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) Levels of abstraction
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Anderson...

> I find this discussion to be very interesting also. I believe there are several levels of abstraction within the OOA. The main problem in identifying the levels occurs because the IM actually contains two levels of abstraction. One level is the relationships between objects and the other is the object attribute lists. If you remove the object attribute lists from the IM and call it the "Object Relationship Model (ORM)" and then remove the relationships from the IM and call it the "Data Model (DM)", the levels of abstraction become:
>
> Level 1 (highest):
>     Object Relationship Model (ORM)
>     Object Communication Model (OCM)
>     Object Access Model (OAM)
>
> Level 2:
>     Data Model (DM)
>     State Transition Diagrams (STD)
>
> Level 3 (lowest):
>     Action Data Flow Diagrams (ADFD)
>
> This follows the concept of Object encapsulating data and process.
Though I have been advocating different levels of abstraction in the OOA and I agree with your assessment above, I would caution that too much might be read into this. For example, I don't think different diagrams are needed for your ORM and DM. Generally one wants to look at both types of information at the same time, especially when doing the lower level stuff. The ERD format is very handy for this. While I think the distinction is sometimes useful, it tends to only be relevant when Whipp and I wander off into one of these angels-on-the-head-of-a-pin dissertations. A lot of useful analysis can be done without worrying about whether attributes are at a different level of abstraction than objects or relationships.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Levels of abstraction
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > For example, if my STD action description says, "Find a blue Frimmet", this will translate into a particular set of ADFD processes that are executed in a particular sequence. These are determined by the IM and only that set of processes dictated by the IM exist in the context of the action. One and only one set of instances will satisfy the STD pseudocode and they will produce one and only one result (a blue Frimmet).
>
> The problem with this paragraph is that, as is all too well known, a natural language (NL) specification is almost always incomplete and ambiguous. There are many potential models that meet it. It is only by constructing the OOA model that you tie down the NL spec.

True, NL can be ambiguous. However, there is nothing to prevent me from defining a high level description language that will be unambiguous.

> (Your NL spec: "Find a Blue Frimmet" may be unambiguous, but in a wider context you may find that other questions are raised - for example, is there a rule when there is more than one blue frimmet?)
>
> The relationship between the spec and potential models is therefore 1:M. Furthermore, artifacts will be introduced in the model that are not part of the NL spec. So it becomes 1c:M (possibly 1c:Mc). Of course, you can later rewrite the spec to attempt to produce a 1:1 mapping - but what would be the point?

If "Find a blue Frimmet" happens to be unambiguous, then I don't see that this conclusion follows. Given the rules of OOA and what is in the IM, then I think there is going to be exactly one way to model this in an action. (Though there may be lots of ways to implement that action in the RD.) My only freedom in the action will be cosmetic things like using a single accessor to get two attributes from a data store.

> > Here is another point of disagreement. I do not consider an object to be simply a Name. An object is an entity that exists regardless of whether I have yet determined what its attributes are.
>
> I think that this is precisely what I stated a couple of paragraphs later in my post. So we don't disagree. It is our conclusions from this that differ. I conclude from this that attributes are part of the same abstraction as the objects. By the same argument, I also conclude that state models and ADFD/SMALL process models are also part of the same abstraction.
> If attributes, etc., really were part of different abstractions (as you have suggested) then they would not be implicit in the object abstraction. They would be explicitly mapped from that other abstraction.

Ah! I am not asserting that they are different abstractions. I am asserting that they are a different _level_ of abstraction. I agree that objects and their attributes are part and parcel of the underlying entity. I am simply asserting that they are views of the same entity at different levels of abstraction (i.e., that reflect different levels of detail).

> > I can also define What an object _is_ in terms of the real world problem space without knowing its attributes. Moreover, I can make high level statements about How it is related to other real entities in the system.
>
> And you should also be able to make high level statements about how it interacts with other real-world entities in the system.

What conclusion am I missing here? It seems to me you just repeated the last sentence with slightly different phrasing.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB, 321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) NOT PARTICIPATING
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

Regarding my tangible temperature identifier:

> The concession that you make is very important. The architecture _can_ use indexes to identify the objects; and store indexes instead of referential attributes (provided it makes sure everything is kept consistent).
>
> This is only possible if the architecture also provides a means of tying the index to a value. The advantage is that the real value is only rarely needed. When a temperature is passed between accessors; or on an event, the index suffices. If the temperature is passed to an expression that is otherwise constant, the result can be precalculated and associated with the index. Only when a calculation is required is the real floating point value required.
>
> Not all these optimisations would actually be beneficial. The purpose of architecture (design) is to use reasonable tricks and avoid others.
>
> One last point about the temperature: there is freedom in the storage of the value. It is necessary that the operations work correctly; but, beyond that, the gloves are off. If I choose to store a temperature as two integers and a boolean then this may be perfectly valid.
>
> You might think that this undermines my argument about NOT PARTICIPATING because I am agreeing that the value must be stored. However, the important point is that the operations must work correctly; and that any means of storage that allows them to work is valid. Mathematically, it is perfectly valid to define a value as the absence of another value in a list.

The operations have always been the prerogative of the RD. My issue in this thread has simply been that as soon as there is an explicit value, the architecture must store it somehow. Just as indexing structs requires a data store to tie the index to a specific temperature, so there has to be some kind of data store to describe the NOT PARTICIPATING value. Though I think it is quite a stretch, I agree that a value can be described by its absence from a list -- but you still need the list of other explicit values for it to be absent from.

-- H. S. Lahman
Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

> > Basically, yes. I think link/unlink represents a higher level of abstraction for activating relationships than writing relational attributes.
>
> Do you expect to see changes in that part of OOA-of-OOA that represents Process Models? Are there some new objects which capture the use of link/unlink? Could ADFDs be changed to exploit the higher level of abstraction provided by link/unlink?

I believe ADFDs could easily be modified to do link/unlink/reference. At most this would require the definition of some new, specialized processes (as opposed to "objects") and some rules about when you can use references.

However, I don't see this happening. This is an _alternative_ way of doing things. If you do this, then you can't do direct writes of referential attributes. (You might be able to, but I wouldn't want to be stuck with the task of keeping things straight in the architecture.) The existing ADFDs seem to work OK, with the minor glitch of NOT PARTICIPATING for shared identifiers for temporally conditional relationships. So why change the paradigm?

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I think we are having terrible problems communicating. We both seem to think that the other person keeps on contradicting themselves.

Can't argue with that. However, I think with this message the true core disagreement gets crystallized. What I think it comes down to is very different views of what the analyst is actually doing (i.e., the analysis issues that the analyst is thinking about) when doing a write to a referential attribute.

> Take the following two quotes by Lahman - both from the same email:
>
> > Well, we miscommunicated somewhere, because I thought the "relationship information structure" _was_ the suite of relational identifiers.
>
> > While I agree that referential attributes are essential to maintaining relational integrity in the model, I do not consider them to be the definition of the relationship.
>
> To my mind, these contradict each other. The first says that the referential attributes store the information about the relationships; the second implies that they don't. I believe that Lahman's position is that information about "run-time instances" of relationships exists outside the run-time values of referential attributes. (2 notes: I don't agree with the concept of an instance of a relationship; and, when I say run-time, I mean during simulation of OOA, not execution of generated code).

The problem here is that the context of my first paragraph was responding to an assertion you made about misinterpreting "relationship information structure". I was describing what I had previously understood to be what _you_ meant when you used that phrase.
Given that, I do not agree with your interpretation of the second statement, which does reflect what I believe. If you recall the context, I feel the definition of a relationship is at the object level, not the instance level. The relationship definition is the What and How of the relationship. The relational identifiers define Which specific instances are related, and I consider that to be a lower level of abstraction. At best the relational identifiers define a single, orthogonal aspect of a relationship.

> Lahman reads my writing and thinks that I contradict myself:
>
> > Finally, your last sentence seems to contradict other statements you have made. For example, "The write operation is not overloaded -- it does a simple task -- it sets the value of the attribute. An implementation may require it to do more than this...." from 1/9/98.
>
> Just as Lahman probably does not see any contradiction in what he writes: I do not see any contradiction in what I wrote. I separated the behaviour of the operation in the context of OOA from its behaviour in the context of an implementation.
>
> The contradiction implied by this would be the same as if I said: "a constant logic value is an oscillating analogue value." This apparent contradiction is explained by the shift to a different abstraction.
>
> Many of our apparent disagreements may be due to a difference in the way we signal a shift of abstraction. The contradiction in Lahman's quotes (above) is explained because he believes that process models and object models exist in different abstractions - a belief I do not share.

First, I worry about your choice of the phrase "different abstractions". I believe there are different levels of abstraction, but I think those levels are simply different views of the same abstraction. At different points in the OOA analysis the analyst is interested in different views or levels of detail of the same fundamental thing.

While I agree with this last paragraph, I don't think it addresses the apparent contradiction. Regardless of levels of abstraction, at the moment one writes a relational attribute one is NEVER simply writing an attribute. At that moment (I would say, at the level of abstraction of the state action) one is activating or deactivating a particular relationship. The write is nothing more than a notational artifact (albeit with some interesting implications, or we wouldn't be in this thread). Where I see the contradiction is that you seem to be denying this by contending that the analyst is merely thinking about putting a specific value in a relational attribute and that the relationship activation is some byproduct that the architecture will take care of.

> However, to get back to responding to the post:
>
> > I think the answer is that it is a lot less klutzy than using associative objects. (Though I agree associative objects are preferable to NOT PARTICIPATING.) Overall, I think the benefits are:
>
> > (1) It is consistent with what the analyst is actually doing in an action -- activating/deactivating relationships on an individual basis.
>
> This would be true if that is what the analyst is doing. But, as you know, I don't agree.

OK, then this is a fundamental disagreement.

> > (2) It is a higher level of abstraction because it unburdens the analyst of the details of particular identifiers. This is particularly true when compound identifiers are used.
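[An editor's aside to make point (2) concrete. The following is a toy Python sketch, not SMALL, and all the names in it are hypothetical. With a compound identifier, activating a relationship by writing referential attributes means copying every identifying attribute by hand; a link expresses the same intent as a single operation on two instances:]

    # Toy sketch: Owner has a three-part compound identifier, and Dog
    # carries matching referential attributes that formalise R1.
    class Owner:
        def __init__(self, first, last, birthdate):
            self.first, self.last, self.birthdate = first, last, birthdate

    class Dog:
        def __init__(self, name):
            self.name = name
            # referential attributes formalising R1 (Dog is owned by Owner)
            self.owner_first = None
            self.owner_last = None
            self.owner_birthdate = None

    def link_r1(dog, owner):
        # one conceptual operation: relate this instance to that instance;
        # the identifier bookkeeping is hidden behind the operation
        dog.owner_first = owner.first
        dog.owner_last = owner.last
        dog.owner_birthdate = owner.birthdate

    rex, jo = Dog("Rex"), Owner("Jo", "Smith", "1970-01-01")
    link_r1(rex, jo)   # versus three separate referential writes

[Either style leaves the same referential state behind; the difference is only in how much identifier detail the analyst must spell out.]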
>
> As I have stated previously, I consider the data to be at a higher level of abstraction than link/unlink.
>
> (Quick note: no contradiction here because, from my perspective, link/unlink are implementation devices that have been wrongly elevated into the OOA)

This is a corollary to the disagreement above. When I am activating a relationship at the action level, I should not have to be concerned with the details of things like how many identifiers there are. I am thinking in terms of relating This Here Instance to That There Instance. (At some point I may have to be concerned with identifiers just so I can somehow get the reference that I need, but in many cases I get them through navigation from other instances. In those cases I couldn't care less what the specific identifiers are.)

> > (3) The use of relationship references within an action tends to be more compact and readable.
>
> We could probably provide endless examples to support and contradict this. It depends what the action is doing.

I am speaking based upon my personal experience having done it both ways on real projects.

> > (4) It improves the chances of detecting certain types of analyst errors.
>
> Whilst we will have to agree to disagree on the specifics, this statement is so general that it is impossible to disprove.
>
> Please allow me to make a counter claim: the use of referential attributes improves the chances of detecting certain types of analyst errors.

But it can be demonstrated to be true, which I did. I agree that when something is demonstrably true, it is tricky to disprove it.

> > (5) It removes the ambiguity of NOT PARTICIPATING for shared conditional relationships.
>
> I don't see NOT PARTICIPATING as ambiguous: just not very nice.

If I have two conditional relationships sharing an identifier and I want to remove only one, how do I unambiguously do that? The only way to do so is to use separate identifiers in the IM, which loses the important fact that they should always be the same.

> > We are poles apart on this one. At the analysis level there are two reasons for writing to an attribute: you want to store a value or you want to modify a relationship.
>
> Use the term "reason" with care. Whenever you write a value to an attribute, the reason is to store the value. There may be higher level reasons, but these higher level reasons do not consist exclusively of "to modify the relationship".

And this is the crux of our disagreement showing up again. When I am writing to a relational identifier I cannot see the analysis context to be anything else but a desire to modify a relationship. In fact I can be virtually certain that I am not storing a value. In this context I see the concept of storing a value as an implementation issue, not an analysis issue.

> > In the first case the syntax is fairly closely related to the actual implementation's data store so that one can be fairly confident that the value will actually be written somewhere.
>
> It depends on the implementation, but yes, in most implementations it is not too difficult to link the implementation value to the OOA value.
>
> > In the second case the syntax is tenuously related to the actual implementation so that the write is largely symbolic and one cannot even count on a value being written.
>
> It is more likely that a non-localised storage mechanism will be used for referential attributes than for plain descriptive attributes. However, this is by no means assured.
>
> It will generally be possible to write a function (possibly only for debug) that will reconstruct the OOA-domain value from the implementation-domain value. Such functions allow a debugger to observe the behaviour of the implementation from the perspective of the OOA abstraction.

Yes, this may be possible. However, I don't think it is relevant to the point. The issue is that writing to a relational attribute is a largely symbolic mechanism that really has nothing to do with data stores in an OOA (unless you really are doing an OOA of an RDBMS). Again, in the analysis context in which one does this syntactical maneuver, the analyst will be concerned with modifying relationships rather than storing data.

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) OMG Action Language

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Everyone:

Here is some news we thought you might be interested in.

Last month, I made a presentation to the Object Management Group (OMG) Object-Oriented Analysis and Design Task Force (OOA&D TF) on _Action Language_. Essentially, I proposed that the OMG make a request for proposal for a: UML-compatible, software-platform-independent, executable, action language.

By UML-compatible, we mean that it should be usable with UML. By software-platform-independent, we mean that the language should be independent of the software architecture (a term we chose not to use, given widespread misunderstanding of our usage). Our goal is to have a translatable action language as an OMG adopted technology. We are now in the process of writing a request for proposal (RFP).

We have set up a Hidden URL (HURL) for E-SMUG's use that has a copy of the presentation: http://www.projtech.com/esmug/hurl.html

We don't think you'll find the presentation very surprising, but we do hope to keep you up-to-date. As ever, we're interested in your comments.

BTW, I have not forgotten your comments on SMALL. They are very helpful. I'm just overwhelmed at the moment. Be talking to you soon.

-- steve mellor

Subject: Re: (SMU) SMALL: Link/Unlink

"Lynch, Chris D. SD" writes to shlaer-mellor-users:
--------------------------------------------------------------------

If I could be so bold...

There has been much traffic on the above subject and the main thrust of the arguments seems to be essentially this: one of you (Whipp) appears to be arguing that the link brothers, in the shared relational identifier situation, are at least a fly in the OOA ointment -- unnecessary, non-orthogonal, and departing from the spirit of the method ("Relationality", to coin a word). The other (Lahman) is arguing that, OK, it's not necessary, but it sure is nice, I (and others) think in link and unlink, and besides, the spirit of the method in this area is not well-defined.

I think it would be very helpful to have PT address Whipp's point that the OOA gears seem to grind a bit in the situations he refers to, and to answer what I believe to be his prior assertion that a fundamental aspect of the method (i.e. its relational character) is being fundamentally and intentionally altered by the introduction of link/unlink.

In short, there are good points on both sides, but I think input from the originators would be timely.
Chris Lynch
Abbott AIS
San Diego, CA

Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lynch...

> There has been much traffic on the above subject and the main thrust of the arguments seems to be essentially this: one of you (Whipp) appears to be arguing that the link brothers, in the shared relational identifier situation, are at least a fly in the OOA ointment -- unnecessary, non-orthogonal, and departing from the spirit of the method ("Relationality", to coin a word). The other (Lahman) is arguing that, OK, it's not necessary, but it sure is nice, I (and others) think in link and unlink, and besides, the spirit of the method in this area is not well-defined.

I have already redefined Whipp's position enough, so I will only refine mine. I believe link/unlink is not necessary in the sense that it is an alternative mechanism to relational attribute writes; you can use either one with equivalent rigor and results. But you have to use one of them.

I don't think the spirit of the method is ill-defined in this area. It seems crystal clear to me. B-) (I may think NOT PARTICIPATING was ill-advised, but that is a detail that probably seemed like a good idea at the time.) I believe the spirit of the method in this area is to provide a rigorous notation that allows relational integrity to be modeled and enforced. I believe either approach will accomplish this.

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Levels of abstraction

"Dean S. Anderson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
>
> lahman writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> [...]
> Though I have been advocating different levels of abstraction in the OOA and I agree with your assessment above, I would caution that too much might be read into this. For example, I don't think different diagrams are needed for your ORM and DM. Generally one wants to look at both types of information at the same time, especially when doing the lower level stuff. The ERD format is very handy for this.
> [...]

I also agree that you don't really need to have different diagrams (though tool support to look graphically at just objects and relationships could be handy in large models). My point was just that the mixing of levels of abstraction that is present in the current diagrams can hide what the real levels of abstraction are.

Dean S. Anderson
Transcrypt International / EF Johnson Radio Systems
ka0mcm@winternet.com

Subject: (SMU) Experience with Real-Time

Jim Armour writes to shlaer-mellor-users:
--------------------------------------------------------------------

Has anyone out there got any experience in applying SM to real-time embedded and DSP systems? Mail me off the list if you prefer.

Jim Armour.

--
Motorola GPD, 16 Euro Way,   Tel No +44/0 1793 565695
Blagrove, Swindon,           Fax No +44/0 1793 541228
England, SN5 8YQ.
mailto:armourj@ecid.cig.mot.com

Subject: Re: (SMU) SMALL: Link/Unlink

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I think I'm coming to some sort of conclusion about link/unlink and their place in the scheme of things.

I had agreed with Lahman when he said (referring to link/unlink):

> (2) It is a higher level of abstraction because it unburdens the analyst of the details of particular identifiers. This is particularly true when compound identifiers are used.

But had also strongly agreed with Whipp when he said:

> (Quick note: no contradiction here because, from my perspective, link/unlink are implementation devices that have been wrongly elevated into the OOA)

The question to me was: Are link/unlink at a higher level of abstraction than what's on an ADFD, or are they an implementation mechanism that belongs in the Architecture domain?

The SMALL paper states that an action language (as with an ADFD) is a form of a more general concept, the Process Model. This idea is not new; STDs and STTs are both used to represent State Models.

In practical terms, the output from a SMALL compilation would go to populate that part of the OOA-of-OOA that deals with the Process Model (it should NOT generate source code). Similarly, the output from an ADFD "conversion" goes to populate the same part of the OOA-of-OOA model.

Since the two outputs must be identical in nature, I now think the use or concept of link/unlink cannot go any further than the SMALL compiler itself and that link/unlink are merely an artifact of the SMALL action language domain; in the same way as the position (x,y) of a bubble on a diagram is a property of an ADFD and is not relevant in the OOA-of-OOA.

My other conclusions are that it must be up to the SMALL compiler to convert calls to the link/unlink operators to the appropriate reads and writes to referential attributes. And that the manipulation of a relationship with link/unlink in an action language has absolutely no impact on the way the Architecture goes about its business.

Having said all this, I still prefer ADFDs over any action language I've seen so far!

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Subject: Re: (SMU) SMALL: Link/Unlink

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:03 AM 1/19/98 -0600, you wrote:
>"Lynch, Chris D. SD" writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>If I could be so bold...
>
>There has been much traffic on the above subject and the main thrust of the arguments seems to be essentially this: one of you (Whipp) appears to be arguing that the link brothers, in the shared relational identifier situation, are at least a fly in the OOA ointment -- unnecessary, non-orthogonal, and departing from the spirit of the method ("Relationality", to coin a word). The other (Lahman) is arguing that, OK, it's not necessary, but it sure is nice, I (and others) think in link and unlink, and besides, the spirit of the method in this area is not well-defined.
>
>I think it would be very helpful to have PT address Whipp's point that the OOA gears seem to grind a bit in the situations he refers to, and to answer what I believe to be his prior assertion that a fundamental aspect of the method (i.e. its relational character) is being fundamentally and intentionally altered by the introduction of link/unlink.
>
>In short, there are good points on both sides, but I think input from the originators would be timely.
>
>Chris Lynch
>Abbott AIS
>San Diego, CA

Hello everyone:

First, let me thank everyone for their contributions to this thread and others regarding SMALL. They do make a difference.

On to Chris' question. Essentially, our reasoning went like this:

* The OIM defines the referential integrity rules; the process model must conform to them. In other words, at the end of the action the referential constraints must still hold--no matter how this was achieved. Note that this must be true whether we write in ADFDs, SMALL, an existing action language or code. We are therefore free to do it either way (with data values or with 'instance references').

* The method uses the relational model for three fundamental reasons:
  - guidance in partitioning. You can know when you're done.
  - it is a way to express analysis information non-redundantly (ie 'normalization' in the analysis sense). Note that this 'non-redundant' 'normalization' property extends to *behavior*.
  - you can translate the relational view of data mathematically into whate'er appropriate data structure you want during the design (ie we can do joins/etc on the data structure at translation time and know that you did it correctly). Note that this, too, extends to behavior--we can imagine translators that 'join' the state model of a customer and its accounts.

  These three reasons allow either approach in the process model--the PM provides a spec. for the operations that must be conformed to in the code.

* The existing S-M action languages use link/unlink and relate/unrelate. That seems to be acceptable to everyone. (BTW, we chose the former pair because they're shorter.)

* We needed to raise the level of abstraction of relationship traversal so that access across multiple objects is a single operation (called out in the introductory sections).

* Link/unlink are closer to how others in the OO world view instances. Using the relational model has always been a problem for the method because of the 'not OO' problem. Of course, _we_ all know that the packaging of objects in the analysis is not the same as in the implementation, but that cuts no ice if you've been told that S-M is 'not OO'. Using the Link Brothers does not appear to stand in the way of translation, and removing arbitrary differences is a Good Thing in these days of unification.

* The Link Brothers sure are convenient.

None of these points, we believe, _require_ instance references, but they head in that direction. In the end, it's the first point that is the key. It was this view that led to the statement (much regretted ;) that "the referential attributes are meaningless in the model." Of course, they _do_ have meaning--they refer to other instances in the model.

Does this mean that we're abandoning the relational model for data organization and partitioning? Absolutely not!

However, Dave Whipp raises a related and important point: the 'orthogonality' of the language. Now that there are instance references and data values, one ends up having to decide which kinds of processes work with what. That is, the Link Brothers and Gen act on instance references, while ComputeSquareRoot does not. While we don't necessarily agree that having types of process is necessarily worse than adding types to dataflows, it's a very good point. His proposal of typed data flows deserves (and will get) further study.
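[An editor's aside: the trade-off Steve describes can be sketched in a few lines of Python. This is an illustration only; the class and function names are hypothetical, not part of SMALL or OOA96. "Types of process" means each process checks what kind of flow it accepts; "types of dataflow" would move that information onto the flows themselves:]

    from dataclasses import dataclass

    @dataclass
    class DataValue:          # an ordinary data flow
        value: float

    @dataclass
    class InstanceRef:        # an instance-reference flow
        instance: object

    def compute_square_root(flow):
        # a transform: defined only for plain data values
        if not isinstance(flow, DataValue):
            raise TypeError("transforms take data values, not references")
        return DataValue(flow.value ** 0.5)

    def gen(flow, event):
        # an event generator: needs a reference to address the instance
        if not isinstance(flow, InstanceRef):
            raise TypeError("gen takes an instance reference")
        print("send", event, "to", flow.instance)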
Again, we don't believe that these points inexorably require the Link Brothers. But neither do the Brothers, in our opinion, break the relationality of the method.

-- steve mellor

Subject: (SMU) Execution and Translation

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Everyone:

For execution and translation to succeed, there has to be a definition of the semantics of the models that we build. The Unified Modeling Language is being accepted--whether we like it or not--as a standard notation. Unfortunately, the semantics of the UML state charts are somewhat ambiguous. As a consequence, we can't use UML notation (again, it's being accepted as a standard whether we like it or not) as the basis for an executable and translatable model.

If we are to bring translation to the UML masses, we have to make it so that the UML Statechart is unambiguous (tho' we don't need all that superstructure), and we need to have an Action Language. For all these reasons, we have been working with the OMG on action language AND we have been trying to raise the importance of defined execution semantics for the statecharts. Therefore, I have been discussing UML statecharts in the OTUG forum.

Yesterday, Grady Booch and I discussed the issues off-line. The following email, posted to OTUG, summarizes our results. I thought you would all enjoy reading it.

-- steve

>
>Everyone:
>
>Grady and I have talked privately, and have agreed to work together to identify the gaps or ambiguities in the semantics of the UML's statecharts that have been the subject of discussion here. We will then formally present these to the OMG RTF for consideration and resolution. Along the way, we intend to draw in some of the other participants in this discussion, so that we can all take the positive steps of advancing the practice of the standard.
>
>-- steve mellor and grady booch
>
>PS. On a personal note, I would like to take this opportunity to thank Steve Tockey for drawing my attention to the issues here. Thanks!

-- steve

Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

> The question to me was: Are link/unlink at a higher level of abstraction than what's on an ADFD, or are they an implementation mechanism that belongs in the Architecture domain?
>
> The SMALL paper states that an action language (as with an ADFD) is a form of a more general concept, the Process Model. This idea is not new; STDs and STTs are both used to represent State Models.
>
> In practical terms, the output from a SMALL compilation would go to populate that part of the OOA-of-OOA that deals with the Process Model (it should NOT generate source code). Similarly, the output from an ADFD "conversion" goes to populate the same part of the OOA-of-OOA model.

This is off the point, but I don't think I agree with this sense of "compile". There are only two reasons to "compile" an OOA: to automate a model simulation or to translate the models into an implementation. In either case I see the OOA-of-OOA as simply another (perhaps implicit) input to the compilation process rather than something that gets "populated". The OOA-of-OOA merely provides contextual constraints on the "compilation". Put another way, I would think the OOA itself is a population of specific instances of an OOA-of-OOA.
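[For what it's worth, that last sentence can be pictured with a tiny metamodel sketch -- an editor's illustration in Python, with hypothetical classes that are not drawn from any published OOA-of-OOA. The OOA-of-OOA supplies the classes; a particular OOA is a set of instances of them, which a simulator or translator then reads as input:]

    from dataclasses import dataclass, field

    @dataclass
    class MetaAttribute:          # OOA-of-OOA concept: "Attribute"
        name: str
        referential: bool = False

    @dataclass
    class MetaObject:             # OOA-of-OOA concept: "Object"
        name: str
        attributes: list = field(default_factory=list)

    # An application OOA as a population of the metamodel:
    dog = MetaObject("Dog", [MetaAttribute("name"),
                             MetaAttribute("owner_id", referential=True)])
    # A translator consumes populations like this as input; on Lahman's
    # view it does not write anything back into them.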
> Since the two outputs must be identical in nature, I now think the use or concept of link/unlink cannot go any further than the SMALL compiler itself and that link/unlink are merely an artifact of the SMALL action language domain; in the same way as the position (x,y) of a bubble on a diagram is a property of an ADFD and is not relevant in the OOA-of-OOA.
>
> My other conclusions are that it must be up to the SMALL compiler to convert calls to the link/unlink operators to the appropriate reads and writes to referential attributes. And that the manipulation of a relationship with link/unlink in an action language has absolutely no impact on the way the Architecture goes about its business.

I was pretty much OK up to the last paragraph. I see link/unlink as an alternative notational representation to relational attribute read/write. To see this, think about how you would do an ADFD using link/unlink _rather than_ referential read/write.

I could convert an ADFD to use link/unlink quite easily -- essentially all I have to do is define two new processes, link and unlink. The resulting ADFD would look very much the same as the relational read/write ADFD. This new version of the ADFD would do exactly the same things as the old one and it would have to enforce exactly the same rules for relational integrity that apply to the referential read/write accessors.

The idea that link/unlink simply write the referential attributes implies that referential attributes are real data in the problem space. I believe that viewing them that way is misleading. An OOA is not an RDBMS ERD and the application objects are not RDBMS tables. The referential attributes are a symbolic notation that embodies the necessary rules to preserve referential integrity so that the relational model provides underlying rigor for the OOA. But I see this as mostly a behind-the-scenes theoretical basis for the notation.

Note that the relational attributes do not go away when one uses link/unlink; they are still in the IM and they will still determine what the architecture has to support when those processes are executed. But link and unlink are just processes, the same as when a relational identifier read/write is done with read/write accessors. The relational read/write accessors are special because they may not write to data stores and they will usually have other processing associated with them in the architecture to enforce referential integrity. That special processing is exactly what the link/unlink processes have associated with them, so there is no need for the indirection of writing to referential attributes.

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Steve Mellor:

Thank you, Steve, for your comments. The way I read them: the theoretical case for the Link Brothers is not proven, but the pragmatic/marketing case is very strong.

I can agree that there is no reason why a set of rules cannot be constructed that ensures link/unlink can be used correctly within the relational model.
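[An editor's aside on what one such rule set might amount to: a post-action consistency check. This is a hedged Python sketch with hypothetical data structures, not an agreed architecture interface. Every link recorded by the Link Brothers must agree with the referential values, so that either notation leaves the architecture in the same state:]

    def check_links(links, instances):
        # links: {(src_id, rel_name): dst_id}; instances: {id: attr dict}.
        # Run at the end of each action, when the referential
        # constraints must hold again.
        for (src, rel), dst in links.items():
            # the linked-to instance must exist ...
            assert dst in instances, rel + " from " + src + " dangles"
            # ... and the formalising attribute must agree with the link
            assert instances[src][rel + "_ref"] == dst, (
                "link and referential attribute disagree on " + rel)

    instances = {"a1": {"R1_ref": "b1"}, "b1": {}}
    check_links({("a1", "R1"): "b1"}, instances)   # passes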
There are some difficulties, but these are much less than those caused by the attempt to crowbar the SM analysis method into the UML design notation - and that project seems to be going full steam ahead.

> The existing S-M action languages use link/unlink and relate/unrelate. That seems to be acceptable to everyone.

Mike Finn has made the point that ADFDs and SMALL must be different, equivalent, notations for the same formalism. I completely agree. ADFDs currently use referential attribute accesses, so either these must be changed, or the formalism must be defined in a way that admits both styles. In this latter case, it should be possible to define an action language that uses referential attributes. This could use SMALL syntax.

I think this position could keep everyone happy. However, if the formalism becomes more complex as a result, then people writing translation engines will be inconvenienced. Many optimisations rely on template matching on the model. If there's more than one way to specify something (at the level of the formalism) then these templates become more complex.

> The Link Brothers sure are convenient.

This may be true. I have never used them, nor desired to use them. Perhaps if I did, then I would find them convenient. I do agree that relationship navigation syntax can be useful.

> However, Dave Whipp raises a related and important point: the 'orthogonality' of the language. Now that there are instance references and data values, one ends up having to decide which kinds of processes work with what. That is, the Link Brothers and Gen act on instance references, while ComputeSquareRoot does not.

The non-orthogonality is also expressed in the fact that some attributes are readable, and others aren't. If I want to read the value of a referential attribute then I have to follow the links all the way to the identifier which provides the value.

> While we don't necessarily agree that having types of process is necessarily worse than adding types to dataflows, it's a very good point. His proposal of typed data flows deserves (and will get) further study.

I think it is important to be clear just what I was proposing. I didn't propose adding types to dataflows: I pointed out that the information already implicitly exists - needing only to be extracted during RD.

Consider the following pseudo code:

    Find an instance A; get its referential attribute: ref_attr.
    Generate an event: B1(ref_attr; ...)

From an OOA perspective, this is a data-oriented approach. But looked at from the point of view of an architecture that uses links, the following facts may be derived (with no extra notation):

. The value of the referential attribute is obtained by following a link from A to B and dereferencing.

. The instance to which the event is generated is the same as the one that was just dereferenced to get the value of ref_attr.

=> Therefore the dereference can be optimised out.

This optimisation could be expressed as an optimisation template, but that is unnecessarily complex (there are many other situations where dereferencing can be optimised out). Instead, it is possible to place an intermediate tag on the dataflow in the translation process. Whenever a value can be associated with a reference, the reference can be tagged onto a dataflow. When a process that can use a reference reads a dataflow that is tagged by the reference, then it can use the reference. Otherwise it must use the value and invoke an implicit (architectural) search accessor to find its reference.
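[An editor's reading of the tagging idea, as a minimal Python sketch. The tag is nothing the analyst writes down; it is something the translator carries alongside a value. All names here are hypothetical:]

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Flow:
        value: object                 # what the OOA sees: a data value
        ref: Optional[object] = None  # translation-time tag, when known

    def gen(flow, instances_by_id):
        # a process that can use a reference prefers the tag ...
        if flow.ref is not None:
            target = flow.ref          # dereference optimised out
        else:
            # ... otherwise it falls back on an implicit search accessor
            target = instances_by_id[flow.value]
        print("event to", target)

[On this sketch the notation is unchanged; only the translator's intermediate representation is enriched.]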
So far, this proposal has no impact upon either the notation or the formalism. I am only pointing out the richness of the information that is already available in the model. However, a syntax that allows compound attributes to be read may simplify the use of referential attributes:

    Dog(one).[id] > ~dog_id;
    Dog(~dog_id).[R1.owner] > ~owner_id;
    (~owner_id) > (~name, ~address);

I have also suggested in previous posts that the use of "~" for both composed and atomic flows can lead to confusion, as can mapping by position. It may be better to use:

    Dog(one).[id] > %dog_id;
    Dog(%dog_id).[R1.owner] > %owner_id;
    %owner_id > (~name, ~address);

or

    %owner_id > (address => ~addr, name => ~name);

Dave. Not speaking for GPS.

--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277  mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306  http://www.gpsemi.com

Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mellor...

> However, Dave Whipp raises a related and important point: the 'orthogonality' of the language. Now that there are instance references and data values, one ends up having to decide which kinds of processes work with what. That is, the Link Brothers and Gen act on instance references, while ComputeSquareRoot does not. While we don't necessarily agree that having types of process is necessarily worse than adding types to dataflows, it's a very good point. His proposal of typed data flows deserves (and will get) further study.

I guess I am missing some subtlety here, but I thought ADFD processes were already typed! Tests and transforms are different than data store accessors and all are different than event generators. In addition there are several flavors of data store accessors if one includes create and delete in this group. I would also argue that a write accessor for a relational attribute is implicitly different from an ordinary data attribute write accessor, though that difference can be derived from the IM. No matter how abstractly one wants to think of "data", an event generator's output is clearly different than a transform's, so they have to be different types of process.

It seems to me that the only new things introduced by the LBs are data flow restrictions: that data of type Object Reference can only be tested for equality, is read-only, and cannot go into or out of a transform process. But I think this already existed as well, at least implicitly. For example, it seems to me that passing a relational identifier to a transform is rather Bad Form unless that identifier also had a tangible semantic in the problem space, because one should not be able to perform random operations on pure relational identifiers.

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) SMALL: Link/Unlink - OOA of OOA

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

In following the link brothers thread, I have repeatedly seen statements that an action language utilising the link brothers is the same as an action language utilising direct writing of referential attributes is the same as an ADFL.

I am not sure that this has been adequately demonstrated to be true.
It would be nice if it were, but I don't think that it is.

I go along with the view that the action language should not (and in the cases under discussion does not) change the OOA of Object Information Modelling, but it seems to me that the syntax used to capture system behaviour is directly related to the OOA of Behaviour Modelling (which I presume to be part of the OOA of OOA).

An example of how this change might flow through to the translation process: If we have an action language which does not allow direct writing of referential attributes, using the link brothers instead, then the syntax of the action language will guarantee that the value written to the referential is of the correct datatype and no runtime checking is needed. If, however, our action language allows us to link a relationship by writing the referential attribute, then it seems that run-time checking may be required to determine that the referential is of type "Identifying Attribute Set of Related Object".

If the above example is watertight, it would seem to imply that the choice of behaviour modelling syntax flows through the OOA of OOA, the architecture, and indeed the target executable.

So, is the OOA of OOA dependent on the choice of behaviour modelling syntax? If so, how is it proven that there is only one OOA of OOA?

regards,
Mike

Subject: Re: (SMU) SMALL: Link/Unlink

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> The idea that link/unlink simply write the referential attributes implies that referential attributes are real data in the problem space. I believe that viewing them that way is misleading. An OOA is not an RDBMS ERD and the application objects are not RDBMS tables. The referential attributes are a symbolic notation that embodies the necessary rules to preserve referential integrity so that the relational model provides underlying rigor for the OOA. But I see this as mostly a behind-the-scenes theoretical basis for the notation.

The idea that the problem space is an RDBMS is (in the general case) obviously false. But this doesn't prevent a table based model of that problem space being constructed. Furthermore, having constructed that model, every item of information in the tables can be mapped to some concept in the problem space.

Even referential attributes do have meaning in the problem space. It is true that many such attributes are only identified as an artifact of the modelling formalism. Other formalisms would not require them - though they would require different formalisations of those concepts.

But now to the meat - what is the purpose of a process model? Is it to describe the behaviour of the problem space? Or is it to describe the dynamics of the OOA -- which then represents the behaviour of the problem space?

I believe that it is the latter. A complete SM model requires both static and dynamic aspects to be described. If, having created the static model, you then proceed to construct a separate dynamic model, then many benefits of the rigorous data model are lost.

Given that my aim is to construct a model of the problem space; and that I believe that a unified underlying formalism should underpin both the static and dynamic aspects of the model; then it follows that, where possible, processes should be defined to interact with the data model, and not introduce additional data concepts.

Dave. Not speaking for GPS.

--
Dave Whipp, Embedded Systems Group
GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277  mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306  http://www.gpsemi.com
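[An editor's aside on Whipp's closing point. On this reading, the way to keep one underlying formalism is to define the Link Brothers in terms of the referential attributes rather than beside them. A toy Python sketch, with hypothetical names:]

    class B:
        def __init__(self, ident):
            self.ident = ident
            self.a_id = None       # referential attribute formalising R1

    def link_r1(b, a_ident):
        b.a_id = a_ident           # the link IS a referential write

    def unlink_r1(b):
        b.a_id = None              # in effect, NOT PARTICIPATING

[Under this reading the dynamic model introduces no data concept that the static model does not already have.]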
Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> The idea that the problem space is an RDBMS is (in the general case) obviously false. But this doesn't prevent a table based model of that problem space being constructed. Furthermore, having constructed that model, every item of information in the tables can be mapped to some concept in the problem space.

True. My point was that where relational attributes are concerned this is a mostly symbolic representation in an OOA, so that it is misleading to think of it in terms of the same physical data accesses that would be implicitly involved with foreign keys in an RDBMS table. This is why I am uncomfortable with your view of relational attributes that focuses on them being data.

> But now to the meat - what is the purpose of a process model? Is it to describe the behaviour of the problem space? Or is it to describe the dynamics of the OOA -- which then represents the behaviour of the problem space?

I disagree here, but I think this is just another flavor of our established disagreement over what the analyst is thinking about (i.e., the level of abstraction) when a relationship is activated (my view) or an instance is identified (your view). Thus my gut response here is that the analyst is always thinking in terms of the problem space and the process models merely represent a different (i.e., dynamic) view of that problem space.

> I believe that it is the latter. A complete SM model requires both static and dynamic aspects to be described. If, having created the static model, you then proceed to construct a separate dynamic model, then many benefits of the rigorous data model are lost.
>
> Given that my aim is to construct a model of the problem space; and that I believe that a unified underlying formalism should underpin both the static and dynamic aspects of the model; then it follows that, where possible, processes should be defined to interact with the data model, and not introduce additional data concepts.

Certainly the LBs are a new construct, but I don't think they represent a new data concept. The idea of relationships exists in the data model and the LBs operate on those relationships in the same manner that relational attribute writes operate upon them.

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink - OOA of OOA

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Morrin...

> In following the link brothers thread, I have repeatedly seen statements that an action language utilising the link brothers is the same as an action language utilising direct writing of referential attributes is the same as an ADFL.
>
> I am not sure that this has been adequately demonstrated to be true. It would be nice if it were, but I don't think that it is.

I believe that it can be demonstrated rigorously, but being an Engineer at heart I can only offer an intuitive argument.
I assert that when one writes to a relational attribute one is either activating or deactivating a relationship between two instances. (Let's ignore the complication of compound attributes; that just extends the write to a set of writes.) When the write accessor is invoked for a relational attribute it usually has to do some architectural maintenance to preserve relational integrity and provide adequate performance for subsequent relationship navigation. The nature of these tasks is strictly an implementation issue, so they may be complex (updating sorted lists), trivial (setting a pointer value), or even a real foreign key write. The important issue is that there is some set of tasks that accessor must perform to ensure that the relational model's constraints are satisfied.

I next assert that link/unlink, when invoked, do exactly the same thing. They simply activate or deactivate a relationship, just as the relational attribute write did. In the implementation those processes will have to perform exactly the same functions that the relational write accessors did. In fact, I would expect that the implementation of, say, unlink would do exactly the same things as the processing in a write accessor for NOT PARTICIPATING. The only differences would be (a) shared identifiers are handled with multiple link/unlink, (b) only one link/unlink is necessary when there are compound identifiers, and (c) instances are identified by references. The mechanisms might be slightly different to support relationship navigation by reference, but the end results must be exactly the same.

The last point seems to be a bone of contention. However, this does not bother me because there are only two ways to obtain an instance reference: by a Find using relational identifiers or by navigating relationships starting from the instance executing the action. Clearly the Find can't be a problem because it uses relational identifiers. I assert that the architecture must be able to correctly navigate relationships because whatever the architecture does is constrained by the relational model defined in the IM, which includes the relational identifiers. Put another way, those tasks that the link/unlink do must ensure that this navigation is correct. Since these tasks are basically the same tasks that the relational write accessor does, they should be correct or else relational write accessors would also be broken.

> If we have an action language which does not allow direct writing of referential attributes, using the link brothers instead, then the syntax of the action language will guarantee that the value written to the referential is of the correct datatype and no runtime checking is needed.

First, the LBs do not have to write relational attributes any more than the relational write accessor needs to write relational attributes. The relational attributes are symbolic and do not necessarily imply any data store for identifiers. Since LBs are an alternative notation to relational write accessors, they would directly invoke the same sort of implementation processing that a relational write accessor might.

There are different levels of correctness that need checking in both cases. Some can be done at translation time and some needs to be done at run time or in simulation. Since I believe the two mechanisms are equivalent, I don't see substantive differences in where the checking is done.

At translation time one is essentially doing the static checking.
For relational writes this comes down to such things as making sure the value data types are consistent with the identifier types and the attributes are actually in the objects that the syntax indicates. For link/unlink the checking is for things like ensuring that the indicated relationship exists in the IM for the current object. The details are different, but the basic idea is the same: the syntax of the specific action statement must be consistent with the IM.

At simulation or run-time there is additional checking that needs to be done for both. For example, if the relationship is 1:1 and is already active, then one should not be able to activate another relationship with a link. For the relational write, there must be an existing instance with the relevant identifier. Again, the details are different but the basic idea is the same: the referential integrity for the specific instances needs to be checked.

[In both cases I believe it should be possible to turn off this checking for a production system after the high level flow of control among actions has been properly sorted out via simulation.]

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> Thank you, Steve, for your comments. The way I read them: the theoretical case for the Link Brothers is not proven, but the pragmatic/marketing case is very strong.

Fascinating. The way I read them, SMALL was developed based upon requirements that ensured that the usage of LBs was theoretically equivalent to the usage of relational attribute write accessors. I also inferred that the marketing/pragmatic issues determined Why an orthogonal representation was developed.

> Mike Finn has made the point that ADFDs and SMALL must be different, equivalent, notations for the same formalism. I completely agree. ADFDs currently use referential attribute accesses, so either these must be changed, or the formalism must be defined in a way that admits both styles. In this latter case, it should be possible to define an action language that uses referential attributes. This could use SMALL syntax.
>
> I think this position could keep everyone happy. However, if the formalism becomes more complex as a result, then people writing translation engines will be inconvenienced. Many optimisations rely on template matching on the model. If there's more than one way to specify something (at the level of the formalism) then these templates become more complex.

The current tools either operate from action languages or from ADFDs, but they don't do both. I see no problem with a tool vendor deciding to use SMALL _or_ ADFDs. If SMALL is chosen, then one gets to use the LBs. If the ADFD format is chosen, then one gets to use the relational writes. Since these two notations for dealing with relationships are equivalent, I don't see why one notational paradigm can't be used in ADFDs and the other in an action language. Vis a vis the point below, I see SMALL and ADFDs as orthogonal notations for describing action dynamics.

> The non-orthogonality is also expressed in the fact that some attributes are readable, and others aren't.
> If I want to read the value of a referential attribute then I have to follow the links all the way to the identifier which provides the value.

I am not sure I follow this; maybe we have different definitions of "orthogonal". It seems to me one has to follow the relationships in either case, and the two mechanisms for doing this are independent (orthogonal). If I have the following in the IM

    A <---R1---> B <---R2---> C <---R3---> D

and I am in an action of A and I want to get any attribute from D, I would have to get from A to D via R1, R2, and R3. In the link case I might have a notation like (I don't have my SMALL syntax handy):

    ref_d = this -> R1 -> R2 -> R3
    x = ref_d.attr_d

However, if I do this in an ADFD I will have a nest of accessor processes that extract successive relational identifiers until I get the D instance. These accessors will be followed by one that actually extracts the desired attribute from the data store using the previously obtained identifiers to identify the store. This may be wordier, but it does exactly the same thing in an independent manner (using the IM relational attributes rather than the IM relationships themselves).

While it is true that using LBs does not require referential attributes to be readable (i.e., a reference can always be obtained by using Find or by navigating relationships from a reference in hand), I don't see why this affects the orthogonality. Referential attributes are symbolic, so if one notational approach chooses a symbolic read and the other doesn't, this just seems to support the idea that those approaches are orthogonal.

--
H. S. Lahman                  There is nothing wrong with me that
Teradyne/ATB                  could not be cured by a capful of Drano
321 Harrison Av.
Boston, MA 02118-2238
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) SMALL: Link/Unlink - OOA of OOA -Reply

Mike Morrin writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman:

I think that my point has gone sailing past without making contact.

>Responding to Morrin...
>> In following the link brothers thread, I have repeatedly seen statements that an action language utilising the link brothers is the same as an action language utilising direct writing of referential attributes is the same as an ADFL.
>>
>> I am not sure that this has been adequately demonstrated to be true. It would be nice if it were, but I don't think that it is.

>I assert that when one writes to a relational attribute one is either activating or deactivating a relationship between two instances

>I next assert that link/unlink, when invoked, do exactly the same thing.

>The mechanisms might be slightly different to support relationship navigation by reference, but the end results must be exactly the same.

I agree up to here, BUT:

I believe that equivalence is broken if there is a SINGLE CASE where code generated from a system modelled with one behaviour modelling syntax cannot be generated from the same system modelled with the other syntax.

>There are different levels of correctness that need checking in both cases. Some can be done at translation time and some needs to be done at run time or in simulation. Since I believe the two mechanisms are equivalent, I don't see substantive differences in where the checking is done.

I disagree here completely. Rule checking which is done at analysis or translation time is fundamentally different from rule checking which can only be done at run time.
Clearly the analysis is different in each case, and if the difference is forced by the modelling syntax, then the syntax is not equivalent.

My point is not that the behaviour of the resulting code is different, but that the (behaviour) analysis is different to achieve the equivalence of resulting behaviour.

To put it another way, I think that the modelling syntax is only equivalent if a model done with one syntax can be automatically converted to the other syntax with no change of behaviour. I think that this can be shown to be impossible in the case under discussion.

Coming back around, I still think that the OOA of behaviour analysis is derived from the analysis syntax, and thus the OOA of OOA is dependent on the behaviour modelling syntax.

Mike Morrin

Subject: Re: (SMU) SMALL: Link/Unlink - OOA of OOA -Reply

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Morrin...

> I believe that equivalence is broken if there is a SINGLE CASE where code generated from a system modelled with one behaviour modelling syntax cannot be generated from the same system modelled with the other syntax.

I agree. So far as I know the only situation where the two approaches would produce a different semantic result is when removing one of two conditional relationships that share an identifier. In that rare case I believe SMALL has fixed a glitch in the methodology.

> I disagree here completely. Rule checking which is done at analysis or translation time is fundamentally different from rule checking which can only be done at run time. Clearly the analysis is different in each case, and if the difference is forced by the modelling syntax, then the syntax is not equivalent.

I agree with the second sentence. I disagree with the last sentence. I partially disagree with the first sentence. B-)

I argue that there are only two types of rules checking: during translation and at execution time, which includes model simulation and production run-time. Translation checks the static rules. (Some tool vendors include some static rule checking as you enter models into the tool, but this is only a convenience and can be viewed as simply repositioning some elementary translations.) The dynamic checking can only be done at execution time when one has specific instances in hand.

My point was that the ADFD and SMALL approaches both require static and dynamic checking. (This was in response to your earlier message that seemed to indicate that run-time checking could be eliminated by using SMALL.) The details may be different, but I believe the nature of the checking is still to verify relational integrity, and that is common to both approaches.

> My point is not that the behaviour of the resulting code is different, but that the (behaviour) analysis is different to achieve the equivalence of resulting behaviour.
>
> To put it another way, I think that the modelling syntax is only equivalent if a model done with one syntax can be automatically converted to the other syntax with no change of behaviour. I think that this can be shown to be impossible in the case under discussion.

I believe it is possible for the reasons I gave. Why do you think it is impossible? However, even if I agreed I would still think that the behavior of the resulting code is the whole point of the exercise.
The ADFD version has a goal (among others) of supporting code production that satisfies the relational model for the problem space as defined by relationships and relational identifiers in the IM when syntactic constructs in the ADFD are translated. Now if one substitutes "SMALL" for "ADFD" in that sentence, it is still valid -- which says to me that the end result is where the beans are counted. > Coming back around, I still think that the OOA of > behaviour analysis is derived from the analysis > syntax, and thus the OOA of OOA is dependent on > the behaviour modelling syntax. I agree that the OOA of OOA would be slightly different for doing ADFDs than for doing SMALL because it does describe notational elements and the elements are different. But I think what matters is the semantics, as represented by the relational model. That is not changed. Like the shadows on the wall of Plato's cave, the OOA of OOA is a syntactic representation of an underlying, invariant theory. SMALL may change the syntax and the details of the produced code, but the reality of the relational model is preserved. My basic contention is that so long as that reality is always properly preserved in the code, then the approaches are equivalent. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) Conditional and unconditional relationships: any difference? David Stone writes to shlaer-mellor-users: -------------------------------------------------------------------- I'd like to ask a question which seems foolish: what is the difference between a conditional and an unconditional relationship? To put it another way: what checks can an architecture make, either at translation time or at run-time, to check that a relationship declared as unconditional is in fact so? pp45-46 of "Modeling the World in States" say at first that each action must ensure consistency of relationships. This would mean that an architecture could add a check at the end of each action. However, they then go on to permit sending an event which will cause the relationship to become consistent. This could be handled at any future time, and an arbitrary amount of processing could be done before the relationship becomes consistent. Thus the rest of the model must assume that the relationship is conditional, even if it is declared as unconditional. I suppose an architecture could check that each instance of an object in an unconditional relationship is linked at some point in its lifetime, though this seems a very weak check. Has anybody any better ideas? Or have I misunderstood entirely? -- David Stone Sent by courtesy of, but not an official communication from: Simoco Europe, P.O.Box 24, St Andrews Rd, CAMBRIDGE, CB4 1DP, UK Subject: Re: (SMU) Conditional and unconditional relationships: any Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- David Stone wrote: > I'd like to ask a question which seems foolish: what is the > difference between a conditional and an unconditional relationship? > > To put it another way: what checks can an architecture make, > either at translation time or at run-time, to check that a > relationship declared as unconditional is in fact so? I don't think this is a foolish question at all. As ever: my answer is a personal one. 
The conditionality and cardinality of a relationship are not constraints on its formation - they are constraints on its use. More importantly, they are assumptions that an architecture is allowed to make. If a relationship is unconditional then the architecture can assume that its navigation will not result in a null pointer. Defensive programmers may react in horror to making that assumption: but there are times when it may be useful. If you are building a complex translation engine then you may be able to utilise the fact that, if you follow the trail of solicited events from an action that modifies a referential attribute, then, at the end of the chain, the relationship must be properly formed. I've only ever used this property for hand-generated code. It is common for the analysts and architects to agree more restrictive definitions. For example, it is common to agree that a 1:1 relationship will never have duplicate values of formalising attributes, even though the method allows that scenario to exist in a transitional period. There is a point of disagreement that was discussed a few months ago on this list. I argued that the values of the referential attributes that formalise a relationship are constrained to be the value of the identifier of an existing instance of the related object if the formalised relationship is unconditional; and that there are no such constraints for a conditional relationship. (Of course, all such constraints are relaxed for a subsequent chain of solicited events). Others argued that the values are always constrained by the instances of the related objects - the difference being that the formalising attributes of conditional relationships have NOT PARTICIPATING added to their domain. (Both these definitions make the base assumption that the domain of a referential attribute is based on the domain of the related attribute). As often happens, my theoretical viewpoint was overruled by the pragmatists (who read the PT training foils). It might have something to do with the convoluted nature of the sentence that describes my viewpoint ;-). To answer your basic question: given the most liberal (but legal) interpretation of the method, there are no checks that a simulator (or architecture) can make when a referential attribute is written (except that the value must be a member of the attribute's domain). Most people, however, don't use the most liberal interpretation. Dave. -- Dave Whipp, Embedded Systems Group GEC Plessey Semiconductors, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Conditional and unconditional relationships: any croche@tellabs.com writes to shlaer-mellor-users: -------------------------------------------------------------------- > David Stone writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > I'd like to ask a question which seems foolish: what is the difference > between a conditional and an unconditional relationship? I'm really not sure of your question, but in terms of modelling, an unconditional relationship means that if both objects participating in a particular relationship exist, they MUST be related at all times. A conditional relationship means that the two objects MAY/MAY NOT be related at any time (1c:1c).
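To make the distinction concrete: a minimal sketch (invented names, assuming a C++ target; not taken from any actual architecture) of the navigation code a translator might emit for the two cases, along the lines of Whipp's null-pointer assumption above:

    struct B { int id; };

    struct AUncond {
        B* r1;    // R1 unconditional: the architecture may assume never null
    };

    struct ACond {
        B* r2;    // R2 conditional: null encodes NOT PARTICIPATING
    };

    // Unconditional navigation: no guard is generated; the model guarantees
    // that a related B instance exists.
    B& navigate_r1(AUncond& a) { return *a.r1; }

    // Conditional navigation: the caller must always test for the unlinked case.
    B* navigate_r2(ACond& a) { return a.r2; }

The unconditional form trades a run-time check for trust in the analyst; the conditional form makes NOT PARTICIPATING a legitimate value that every navigation must handle.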
> > To put it another way: what checks can an architecture make, either at > translation time or at run-time, to check that a relationship declared > as unconditional is in fact so? > > pp45-46 of "Modeling the World in States" say at first that each > action must ensure consistency of relationships. This would mean that > an architecture could add a check at the end of each action. However, > they then go on to permit sending an event which will cause the > relationship to become consistent. This could be handled at any > future time, and an arbitrary amount of processing could be done > before the relationship becomes consistent. Thus the rest of the > model must assume that the relationship is conditional, even if it is > declared as unconditional. > > I suppose an architecture could check that each instance of an object > in an unconditional relationship is linked at some point in its > lifetime, though this seems a very weak check. I believe that this can be checked by fragment generation. Specifically, when the action language of the model is parsed and a "select...related by" (e.g. select one dog related by owner->DOG[R1];) is encountered, the code that will be generated can specifically check that the unconditional relationship DOES exist: in archetype language, the OOA of OOA model can be checked to see if the left object (owner) in the current link (owner->DOG) being parsed is related to a right object instance (DOG; i.e. if (not_empty right_obj)) when it should be. I believe this would be a run-time check. Of course, if the right_obj handle returns "empty" you must have a way of handling this error, even though it truly is an analyst error (i.e. the analyst should always relate any newly created object instance to the other object instance(s) it is unconditionally related to, and likewise for the delete). > > Has anybody any better ideas? Or have I misunderstood entirely? I hope this helps. > > -- > David Stone > Sent by courtesy of, but not an official communication from: > Simoco Europe, P.O.Box 24, St Andrews Rd, CAMBRIDGE, CB4 1DP, UK > Christina Roche Subject: Re: (SMU) Conditional and unconditional relationships: any lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Stone... > I'd like to ask a question which seems foolish: what is the difference > > between a conditional and an unconditional relationship? I agree with Whipp that this is not a foolish question at all. There are some rather nasty problems for the RD in trying to deal with this difference. First, for context, let me clarify that there are really two types of conditional relationships in practice. The more common type exists when the conditionality lives and dies with the related instances. For example, the relationship Person <--- 1:Mc ---> Pet says that a Person may own 0 or more Pets and that a Pet is always owned by a Person. In this case, whenever we have a Pet instance, there will be a relationship with a Person. Conversely, as soon as the Pet is whacked at the Pet Euthanasia Center, that particular relationship goes away. I think of this as a non-temporal conditional relationship because it depends solely upon the existence of one or both of the instances. [Note that two Persons could swap Pets. In this case the related instances change, but the relationship endures from the Pet's viewpoint.]
A second, less common, type of conditional relationship occurs when both the related instances can exist beyond the relevance of the relationship. Suppose we change the relationship slightly to be Person <--- 1c:Mc ---> Critter. Now we are allowing Persons and Critters to coexist independently. If our application changes from a Pet Euthanasia Center to a Critter Placement Bureau, it is entirely possible that a given Critter instance may go through cycles of ownership, depending upon its disposition and the guarantees made by the Bureau. I think of this as temporal conditionality because the relationship exists or not depending upon when it is observed in the application's execution. > To put it another way: what checks can an architecture make, either at > > translation time or at run-time, to check that a relationship declared > > as unconditional is in fact so? I agree with Whipp here -- checking is not the primary issue. The conditionality and cardinality represent constraints that affect the way the translation produces code. Ideally, in a production system built from a correct OOA there should be little need for checking anything about the relationships. In practice one often does check things like NULL pointers to be able to exit gracefully in case the OOA or the translator is not as correct as one might wish -- but this is driven by human frailty rather than a need to validate or enforce relational integrity. Simulation, of course, is another matter. In this context one expects problems so checking is routinely done. Similarly, developers like to identify mistakes early so CASE tools tend to do translation-style static checking as the model elements are placed in the database. Now to tie in my temporal vs. non-temporal distinction above... The architecture is going to have to supply some sort of efficient mechanism for navigating relationships. How it does that is determined by the cardinality and conditionality of the relationships. Typically cardinality will drive the overall data structure while conditionality will drive the need for value testing, flag values, and dynamic allocations. For example, if I am doing the Pet example above, I might be able to allocate fixed arrays of pointers for each owner at the time the domain is initialized because the number of Pets owned might be known. Clearly this would not be the case for the Critter situation and I would probably have to use some sort of dynamic allocation scheme combined with flag values for the NOT PARTICIPATING cases (see the sketch below). While this is a stretch and faintly silly, it illustrates the point that architectural mechanisms are the real issue rather than checking. If consistent mechanisms are used, then everything should Just Work without the need for checking. > pp45-46 of "Modeling the World in States" say at first that each > action must ensure consistency of relationships. This would mean that > > an architecture could add a check at the end of each action. However, > > they then go on to permit sending an event which will cause the > relationship to become consistent. This could be handled at any > future time, and an arbitrary amount of processing could be done > before the relationship becomes consistent. Thus the rest of the > model must assume that the relationship is conditional, even if it is > declared as unconditional. Things will only Just Work if the analyst has done a proper job. This usually means that everything has been properly defined within a state action. For example, if an instance is created, the analyst should make sure all relational attributes for unconditional relationships are written before leaving the action.
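Going back to the fixed-array vs. dynamic-allocation point: a rough sketch (invented C++ types; one of many possible schemes, not from any actual architecture) of the two storage mechanisms might look like this:

    #include <vector>

    struct Pet {};
    struct Critter {};

    // Non-temporal case (Pet): the population is known when the domain is
    // initialized, so a fixed block of pointers per Person will do.
    struct PersonWithPets {
        Pet* pets[8];                    // size fixed at domain initialization
    };

    // Temporal case (Critter): ownership comes and goes, so storage must be
    // dynamic and an empty container serves as the NOT PARTICIPATING flag.
    struct PersonWithCritters {
        std::vector<Critter*> critters;  // grows and shrinks as links form and break
    };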
Treating the action boundary as the consistency boundary makes life easier for both the analyst and the architecture. Consider the contra case where I create an instance in one action and set the relational identifiers in a second action. As an analyst I need to be quite sure that no other action is going to access those relational identifiers before the second action completes. I have two choices: provide for it explicitly in the OOA or con the Architect into allowing me to colorize the model so that the architecture can maintain consistency for me. The last is not a good idea in my mind because it hides an important facet of the problem space in the RD (assuming one views colorization as part of the RD rather than the OOA). It also potentially creates a lot of headaches for the architecture. However, dealing with it explicitly in the OOA tends not to be much better. If you can use a self-directed event, this is pretty straightforward. In most other cases it gets downright ugly. Thus I tend to belong to the Don't Do That School and I try to use synchronous services to get everything done within an action wherever possible. So what's the point? The architecture selects and implements mechanisms to support relationship navigation based primarily upon static things like cardinality and conditionality. The translation implements those mechanisms in a manner that assumes consistency with the IM's relational model. It is up to the analyst not to rock the boat by creating situations where the relational model is invalid. This is fairly simple to do within action boundaries, but once consistency spans action boundaries in the OOA, it may require heroic effort to keep control of things. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Conditional and unconditional relationships: any differ "Lynch, Chris D. SD" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman responding to Stone... ------------------------------------------------------------------------ >It is up to the analyst not to rock the boat by creating >situations where the relational model is invalid. This is fairly simple >to do within action boundaries, but once consistency spans action >boundaries in the OOA, it may require heroic effort to keep control of >things. ------------------------------------------------------------------------ According to my understanding of SM OOA, leaving the action without ensuring relational integrity is not only a bad idea, but illegal. I think the state modeling book is misleading when it says the action can terminate having only *sent* the events which will make the system consistent. I believe this is true ONLY with respect to processing an external event (i.e., the event has not "rippled through" yet, hence the "system is not yet consistent"), but does not allow relational integrity to be in doubt between actions. The practical means of following this rule is to use synchronous services (at the model level) for instance creation, deletion, and relationship manipulation, so that all the necessary "integrity stuff" (create, delete, link, etc.)
can be accomplished during one action. Hope this helps, Chris ------------------------------------------------------------------------ Chris Lynch Abbott AIS San Diego, CA ------------------------------------------------------------------------ Subject: Re: (SMU) Conditional and unconditional relationships: any differ lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > According to my understanding of SM OOA, leaving the action without > ensuring relational integrity is not only a bad idea, but illegal. I certainly agree that it is a bad idea, but I don't think it is illegal. > I think the state modeling book is misleading when it says the action > can terminate having only *sent* the events which will make the > system consistent. I believe this is true ONLY with respect to > processing an external event (i.e., the event has not > "rippled through" yet, hence the "system is not yet consistent"), > but does not allow relational integrity to be > in doubt between actions. You may be right about the intention, but that isn't what was said. I am skeptical of your interpretation because the system being brought into consistency is the Application and the external event may be truly external in that it goes outside the Application. In that situation the analyst would still have to provide for the consistency explicitly because the world beyond the Application is not necessarily deterministic. Given that one has to deal with it in this case (external events outside the Application), the extension to domains (wormhole events) or state machines (events to other state machines) is straightforward. I was also under the impression that one of the reasons for introducing priority for self-directed events in OOA96 was to provide an architectural mechanism to assist the analyst in dealing with consistency across actions. > The practical means of following this rule is to use synchronous > services (at the model level) for instance creation, deletion, and > relationship manipulation, so that all the necessary "integrity stuff" > (create, delete, link, etc.) can be accomplished during one action. I agree that this is the preferable approach. I am not convinced, though, that all problem space issues can be handled this way. The following example is contrived, but I think it illustrates the type of thing I am worried about. Consider the creation of an instance, Ai, that must be unconditionally related to some other instance, Bj, and that all the B instances already exist. The particular Bj that needs to be related can only be determined by querying some external source via an event. The tricky part is that the query event must supply a hash value that only Ai can compute (i.e., the problem space dictates that the hash calculation is a pure operation on A's data so it belongs in A's state machine) and the external response event will provide a j that depends upon this value in some mystical way. In practice there will almost always be ways around this. The query event might be convertible to a synchronous wormhole or something. But there will clearly be situations where this will not be possible. The other way around the problem is to move the calculation of the hash value out of Ai.
In some cases this might work, but I think there would be others where it would be dubious because it would break the basic OO principle that operations on data are encapsulated with the data. S-M isn't too worried about this per se since its unit of reuse is the domain, but I still think it's a good idea. But beyond that argument, I can make the example more complicated where the calculation of the hash value itself requires multiple states within A. Admittedly a low probability of encountering a real situation like that, but not impossible. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) Prioritising Events. "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- > >Responding to Lynch... > [snip] > >I was also under the impression that one of the reasons for introducing >priority for self-directed events in OOA96 was to provide an >architectural mechanism to assist the analyst in dealing with >consistency across actions. > 'ang on a mo' ... I may not be as familiar with OOA96 as I should be, but the above paragraph doesn't sound correct to this analyst. How can an analyst give priority to self-directed events? This doesn't appear to make sense. Since each object instance is executing independently of each other object instance, the only way one can prioritise events is if there is a queue of events to the same object instance. So how is introducing priority for self-directed events, in the architecture, helping the analyst? I'm going to guess that the writer meant that the mechanism assists the architect in ensuring consistency. Leslie Munday. Subject: Re: (SMU) Conditional and unconditional relationships: any differ "Lynch, Chris D. SD" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman: >I was also under the impression that one of the reasons for introducing >priority for self-directed events in OOA96 was to provide an >architectural mechanism to assist the analyst in dealing with >consistency across actions. I missed that one. I think the prime mover for that feature is to allow intuitive processing of "done" events, so that a state which only exits on a self-directed "done" can avoid receiving an event from another object before its own "done". >I agree that this is the preferable approach. I am not convinced, >though, that all problem space issues can be handled this way. The >following example is contrived, but I think it illustrates the type of >thing I am worried about. >Consider the creation of an instance, Ai, that must be unconditionally >related to some other instance, Bj, and that all the B instances already >exist. The particular Bj that needs to be related can only be >determined by querying some external source via an event. The tricky >part is that the query event must supply a hash value that only Ai can >compute (i.e., the problem space dictates that the hash calculation is a >pure operation on A's data so it belongs in A's state machine) and the >external response event will provide a j that depends upon this value in >some mystical way. >In practice there will almost always be ways around this. The query >event might be convertible to a synchronous wormhole or something. But >there will clearly be situations where this will not be possible.
The >other way around the problem is to move the calculation of the hash >value out of Ai. In some cases this might work, but I think there would >be others where it would be dubious because it would break the basic OO >principle that operations on data are encapsulated with the data. S-M >isn't too worried about this per se since its unit of reuse is the >domain, but I still think it's a good idea. But beyond that argument, >I can make the example more complicated where the calculation of the >hash value itself requires multiple states within A. Admittedly a low >probability of encountering a real situation like that, but not >impossible. Maybe we have a style and/or interpretation difference here. I would say that the relationship in your example is conditional rather than unconditional. After all, what does a third object do when it tries to follow the relationship to the "unconditionally" linked instance and finds that there is none? Block, knowing that the link is "coming soon"? Throw a malfunction, knowing that the link is broken and never coming? Beyond your example, however, I do see the need to address data integrity *during* an action; I've suggested to Ms. Shlaer that some sort of lockouts and/or transaction processing may be desirable additions to OOA to support the "simultaneous interpretation of time". -------------------------------------------------------------------------- Chris Lynch Abbott AIS San Diego CA -------------------------------------------------------------------------- Subject: Re: (SMU) Conditional and unconditional relationships: any differ lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > >I was also under the impression that one of the reasons for introducing > >priority for self-directed events in OOA96 was to provide an > >architectural mechanism to assist the analyst in dealing with > >consistency across actions. > > I missed that one. I think the prime mover for that feature is to > allow intuitive processing of "done" events, so that a state which > only exits on a self-directed "done" can avoid receiving an event > from another object before its own "done". I agree, which is why I said "...one of the reasons". I don't have a specific basis for the impression, though I *think* the idea came up in the threads when OOA96 was introduced. > >Consider the creation of an instance, Ai, that must be unconditionally > >related to some other instance, Bj, and that all the B instances already > >exist. The particular Bj that needs to be related can only be > >determined by querying some external source via an event. The tricky > >part is that the query event must supply a hash value that only Ai can > >compute (i.e., the problem space dictates that the hash calculation is a > >pure operation on A's data so it belongs in A's state machine) and the > >external response event will provide a j that depends upon this value in > >some mystical way. > > Maybe we have a style and/or interpretation difference here. I would > say that the relationship in your example is conditional rather than > unconditional. After all, what does a third object do > when it tries to follow the relationship to the "unconditionally" linked > instance and finds that there is none? Block, knowing that the > link is "coming soon"? Throw a malfunction, knowing that the > link is broken and never coming?
You are correct, the Bj needs to be created at the same time as the Ai if the relationship is unconditional. As a megathinker dealing in Grand Concepts, attention to detail has never been one of my strong points. However, I don't think this significantly affects the problem I postulated. The Ai has to be created in one action to build the hash value but the Bj can't be created (and the relationship defined) until another action because of the need to process the event to determine which Bj to create. Consistency is still not possible until both actions have completed. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) OOA of TCP/IP? "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- Does anyone out in the vast listening audience have (or know of) an OOA model of TCP/IP that they might be willing to share? We're in need of an implementation of TCP/IP in Java to use on an in-house all-silicon JVM (a cpu whose instruction set is the JVM bytecodes). The only TCP/IP in Java appears to be part of Sun's JavaOS (which we don't need and which they don't want to unbundle the TCP/IP from), so it looks like we might end up having to build one from scratch. I can get a big jump on things if I can get an OOA model of it rather than starting from a blank slate. I'm hoping that someone has already modeled it as a service domain for one of their systems. Thanks in advance. -- steve Subject: Re: (SMU) Prioritising Events. baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- On Wed, 28 Jan 1998 09:43:10 -0000, you wrote: >"Leslie Munday" writes to shlaer-mellor-users: >-------------------------------------------------------------------- [snip] > >'ang on a mo' ... > >I may not be as familiar with OOA96 as I should be, but the above paragraph >doesn't sound correct to this analyst. > The OOA96 rule is as follows. "RULE (expedite self-directed events): If an instance of an object sends an event to itself, that event will be accepted before any other events that have yet to be accepted by the same instance." > >How can an analyst give priority to self-directed events? This doesn't >appear to make sense. Since each object instance is executing independently of >each other object instance, the only way one can prioritise events is if >there is a queue of events to the same object instance. > As you can see from the rule above, it only applies if an object instance generates the event to the same object instance. If the event is generated to a different instance of the same object, then this rule does not apply. >So how is introducing priority for self-directed events, in the >architecture, helping the analyst? > >I'm going to guess that the writer meant that the mechanism assists the >architect in ensuring consistency. > I think the architecture must enforce the rule, and therefore the analyst can take advantage of the rule in development of state models. It is a big help for the following reasons. Without the rule, the analyst must consider what to do if any other events are accepted before the self-directed event. And to make matters worse, if you accept one or more other events before the self-directed event, you must consider what to do when the self-directed event is accepted from any other states to which you may have transitioned. Before this rule, we avoided self-directed events like the plague. In the architecture that I use, we implement self-directed events as direct procedure calls to the next state action, thus not allowing any other events to be taken off the queue. Bary Hogan LMTAS
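As an aside, the expedite rule is easy to picture as a queueing mechanism. A minimal sketch (invented names, assuming a C++ target; real architectures differ):

    #include <deque>
    #include <string>

    struct Event {
        int target;           // receiving instance
        int sender;           // generating instance
        std::string label;
    };

    struct InstanceQueue {
        std::deque<Event> q;

        void post(const Event& e) {
            if (e.sender == e.target)
                q.push_front(e);   // expedited: accepted before anything waiting
            else
                q.push_back(e);
        }

        Event accept() {
            Event e = q.front();
            q.pop_front();
            return e;
        }
    };

Note that a naive push_front would deliver two self-directed events from the same action in reverse order -- which is exactly the rarer case that comes up later in this thread.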
Subject: (SMU) PBX Model "Duane Perry" writes to shlaer-mellor-users: -------------------------------------------------------------------- Is there a source, book, paper (stop me when you see a pattern) that will help with model block? I have to model a pseudo-PBX system and cannot seem to get started. I keep drawing the same image over and over. x lines in to y phones with conference combinations. I am stuck. Duane Subject: Re: (SMU) Prioritising Events. bgrim@ses.com (Bob Grim) writes to shlaer-mellor-users: -------------------------------------------------------------------- > In the architecture that I use, we implement self-directed events as direct > procedure calls to the next state action, thus not allowing any other events to > be taken off the queue. The danger with that approach is that you need to ensure that the self-directed event is generated at the end of its particular block of action language within a state.

State foo                              State bar
---------                              ---------
//begin action language                //begin action language
blah blah blah                         blah blah blah
.                                      .
.                                      .
Generate SelfDirectedEvent which       .
  takes you to the bar state           .
.                                      .
.                                      //end action language
//end action language

In this case, during simulation, state foo would execute in its entirety and then state bar would execute. In the code (because of the direct function call of the state bar), half of foo would execute, then all of bar would execute, and then foo would finish up. This difference might not be a big deal but it certainly could be. Generating the event at the end of foo rather than the middle gets rid of the problem. Thanks Bob Grim Subject: Re: (SMU) Prioritising Events. yoakley@oiinc.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary Hogan wrote: > > > Before this rule, we avoided self-directed events like the plague. > > In the architecture that I use, we implement self-directed events as direct > procedure calls to the next state action, thus not allowing any other events to > be taken off the queue. > > Bary Hogan > LMTAS Just a quick caveat. If the self-directed events are implemented as direct procedure calls AND the calls are made from within the action (basically replacing the asynchronous event generation), then you have to be aware that you have opened the door to violating the event ordering rule and also that you may cause issues due to data inconsistencies. Fortunately, the OOA patterns which cause trouble are a little uncommon so there are practical ways to deal with these issues and still get the benefits of synchronous events. David Yoakley Subject: Re: (SMU) PBX Model "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Duane Perry" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Is there a source, book, paper (stop me when you see a pattern) that will > help with model block?
I have to model a pseudo-PBX system and cannot seem > to get started. I keep drawing the same image over and over. x lines in to > y phones with conference combinations. I am stuck. Let me see if this helps... The "essence" of a PBX system is to dynamically maintain a set of connections between 'terminals'. Consider the terminals as a relatively static (population-wise) object/class. Consider terminal-to-terminal connections (e.g., Subscriber A makes a call to Subscriber B) to be a dynamically created & deleted associative object/class between instances of terminals. So I think the basic model is quite simple: an object/class for the terminals and an associative object/class which represents connections. The model can become more complex as you bring in distinctions between types of terminals, such as subscriber lines vs. trunk lines. Conference combinations might be representable by another class that groups subsets of the connections. Also, don't get too hung up on the dynamics (states & processes) from the terminal/subscriber perspective. A lot of the dynamics are, IMHO, centered on the associative object/class representing the connection. You might want to take a look at Jacobson's book; he uses a PBX as one of his examples. I don't recall how S-M friendly it was, but I recall that it did treat the connection as a separate object/class and I think that's the key idea. Hope this helps, -- steve Subject: Re: (SMU) PBX Model Leora Bell writes to shlaer-mellor-users: -------------------------------------------------------------------- I know of no book or paper that specifically addresses "model block", but because I was a technical writer for eight years, I can tell you that "writer's block" and "model block" have the same root cause. There's a lot of literature that deals with writer's block, much of it online. Go to altavista and search on "writer's block." Duane Perry wrote: > > "Duane Perry" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Is there a source, book, paper (stop me when you see a pattern) that will > help with model block? I have to model a pseudo-PBX system and cannot seem > to get started. I keep drawing the same image over and over. x lines in to > y phones with conference combinations. I am stuck. > > Duane -- Leora H. Bell Senior Member of Technical Staff Digital Systems Division 1000 Remington Blvd. Bolingbrook, IL 60440-4955 ---------------------------------- Telephone: 630.679.3377 Fax: 630.679.3494 email: lbell@tellabs.com Mail Stop: 367 Cube: B3078-R Subject: Re: (SMU) Prioritising Events. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to yoakley... > Just a quick caveat. If the self-directed events are implemented > as direct procedure calls AND the calls are made from within > the action (basically replacing the asynchronous event generation), > then you have to be aware that you have opened the door to > violating the event ordering rule and also that you may cause > issues due to data inconsistencies. Fortunately, the OOA patterns > which cause trouble are a little uncommon so there are > practical ways to deal with these issues and still get the > benefits of synchronous events. While your point about data consistency is well-taken, I would like to be sure I understand what you meant by violating the event ordering rule.
I assume you are referring to the rule that says events between the same two instances must be processed in the same order as they were issued. Are you referring to the possibility of nested events where an action issues two self-directed events and the target action of the first event issued also generates a self-directed event? -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: RE: (SMU) PBX Model "Lynch, Chris D. SD" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Perry: >Is there a source, book, paper (stop me when you see a pattern) that will >help with model block? I have to model a pseudo-PBX system and cannot seem >to get started. I keep drawing the same image over and over. x lines in to >y phones with conference combinations. I am stuck. > >Duane Have you tried modeling it the other way around, i.e., from the PhoneCall perspective? (I assume you're at the application level.) The call comes into existence when someone local goes OffHook or there is an incoming call. Limitations on how people join the conference call are part of the call state-model or a Call/Subscriber assigner. The IM shows numeric limitations on numbers of participants, as well as things like multi-line phones, etc. The IEEE Press book on JSD and JSP has an extended example from a phone system. Another good reference would be any of Pamela Zave's Bell Labs stuff on phone-system modeling. Hope this helps. -Chris Lynch Abbott AIS San Diego, CA Subject: Re: (SMU) Prioritising Events. "John D. Yeager" writes to shlaer-mellor-users: -------------------------------------------------------------------- I just can't resist one of my favorite subjects. A very simple case illustrating the problem would have two objects A and B. Consider two states of A - S1 and S2 where the event EA1 causes the transition from S1 to S2. If the state action for S1 is

[...]
Send self EA1
[...]
Send b EB1
[...]

where b is an instance of B and the state action for S2 is

[...]
Send b EB2
[...]

Then implementing the "Send self EA1" as a direct function call at its point in the action can lead to the events EB2 and EB1 being reversed in order. The simple solution is to move the self-directed events to the end of the action. However, this may require marshalling the event data to allow sending it at the end -- typically avoiding marshalling is one of the reasons for wanting to make the call directly. The "move it to the end" doesn't solve the (rarer) case of sending oneself two events in the same action. John lahman wrote: > Responding to yoakley... > > Just a quick caveat. If the self-directed events are implemented > > as direct procedure calls AND the calls are made from within > > the action (basically replacing the asynchronous event generation), > > then you have to be aware that you have opened the door to > > violating the event ordering rule and also that you may cause > > issues due to data inconsistencies. > While your point about data consistency is well-taken, I would like to > be sure I understand what you meant by violating the event ordering > rule. I assume you are referring to the rule that says events between > the same two instances must be processed in the same order as they were > issued. Are you referring to the possibility of nested events where an > action issues two self-directed events and the target action of the > first event issued also generates a self-directed event? -- John Yeager Cross-Product Architecture Lucent Technologies, Inc., Bell Labs johnyeager@lucent.com Business Communications Systems voice: (732) 957-3085 200 Laurel Ave, 4C-514 fax: (732) 957-4142 Middletown, NJ 07748
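Yeager's reversal can be reproduced in a few lines (invented scaffolding, assuming a C++ target; the printf stands in for event delivery to b):

    #include <cstdio>

    void send_to_b(const char* ev) { std::printf("b receives %s\n", ev); }

    // S2's action sends EB2 to b.
    void s2_action() { send_to_b("EB2"); }

    // "Send self EA1" implemented as a direct call at its point in S1's
    // action: b sees EB2 before EB1, reversing the order the model implies.
    void s1_direct() {
        s2_action();
        send_to_b("EB1");
    }

    // Deferring the self-directed event until S1 completes restores the order.
    void s1_deferred() {
        send_to_b("EB1");
        s2_action();
    }

    int main() {
        s1_direct();      // prints: EB2, then EB1
        s1_deferred();    // prints: EB1, then EB2
        return 0;
    }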
Subject: Re: (SMU) Prioritising Events. Andrew Mangogna writes to shlaer-mellor-users: -------------------------------------------------------------------- > bgrim@ses.com (Bob Grim) writes to shlaer-mellor-users: > --------------------------------------------------------------------
>
> State foo                              State bar
> ---------                              ---------
> //begin action language                //begin action language
> blah blah blah                         blah blah blah
> .                                      .
> .                                      .
> Generate SelfDirectedEvent which       .
>   takes you to the bar state           .
> .                                      .
> .                                      //end action language
> //end action language
>
> In this case, during simulation, state foo would execute in its entirety and > then state bar would execute. In the code (because of the direct function > call of the state bar), half of foo would execute, then all of bar would > execute, and then foo would finish up. This difference might not be a big deal > but it certainly could be. Generating the event at the end of foo rather > than the middle gets rid of the problem. > > Thanks > > Bob Grim Actually what solves the problem best is an architecture that can detect the circumstance and execute the transition only after the "foo" action has completed. This is a well known problem with devising software mechanisms to execute finite state machines and is an artifact of the transitory states that arise in the usual formulations of state machines that are used for software. It is not that difficult to solve, but it does mean that the transition execution must keep track of whether it is currently executing an action and store the event that causes the transition to "bar" away to be executed after the "foo" action. The fact that the event that causes the transition can have parameters complicates the matter a bit. My own experience in devising state machine execution engines is that to require the "direct execution" transition to be coded only at the end of the action is far too cumbersome and error prone. _______________________________________________________________________ Andrew Mangogna andrewm@slip.net Subject: Re: (SMU) Prioritising Events. David Yoakley writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Are you referring to the possibility of nested events where an > action issues two self-directed events and the target action of the > first event issued also generates a self-directed event? > That's the pattern. dy > -- > H. S. Lahman There is nothing wrong with me that > Teradyne/ATB could not be cured by a capful of Drano > 321 Harrison Av. L51 > Boston, MA 02118-2238 > (Tel) (617)-422-3842 > (Fax) (617)-422-3100 > lahman@atb.teradyne.com Subject: Re: (SMU) Prioritising Events. bgrim@ses.com (Bob Grim) writes to shlaer-mellor-users: -------------------------------------------------------------------- > > In this case, during simulation, state foo would execute in its entirety and > > then state bar would execute. In the code (because of the direct function > > call of the state bar), half of foo would execute, then all of bar would > > execute, and then foo would finish up.
This difference might not be a big deal > > but it certainly could be. Generating the event at the end of foo rather > > than the middle gets rid of the problem. > > > > Thanks > > > > Bob Grim > > Actually what solves the problem best is an architecture that can > detect the circumstance and execute the transition only after the > "foo" action has completed. This is a well known problem with devising > software mechanisms to execute finite state machines and is an artifact > of the transitory states that arise in the usual formulations of state > machines that are used for software. It is not that difficult to solve, > but it does mean that the transition execution must keep track of whether > it is currently executing an action and store the event that causes the > transition to "bar" away to be executed after the "foo" action. The fact > that the event that causes the transition can have parameters complicates > the matter a bit. My own experience in devising state machine execution > engines is that to require the "direct execution" transition to be coded > only at the end of the action is far too cumbersome and error prone. > > _______________________________________________________________________ > > Andrew Mangogna > andrewm@slip.net > > Of course there are multiple ways a well thought out architecture can handle this situation. As far as your last sentence, I agree that it would be cumbersome. My point up above was that synchronous calling of a state from another state is error prone and, in my opinion, should be avoided. The only way it *might* work is to put the calls at the end of the action. I have always disliked the rule on giving self-generated events a higher priority. It seemed to me that Sally and Steve took a specific instance of a bigger issue and created a rule to satisfy that specific problem. I believe we should be able to put priorities on *any* event. For example, let's say I am designing the launch control software for a major aerospace company. One of the key features I would want to implement is the abort launch feature. I would want the event (or events) handling the abort launch to have a very high priority to supersede most (if not all) other events going on in the system. Bob Subject: Re: (SMU) PBX Model Kenneth Cook writes to shlaer-mellor-users: -------------------------------------------------------------------- Try "How to Build Shlaer-Mellor Object Models". Starr. ISBN 0-13-207663-2. Chapter 8 "How to Avoid Model Hacking". -Ken At 11:17 PM 1/29/98 -0500, Duane Perry wrote: >"Duane Perry" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Is there a source, book, paper (stop me when you see a pattern) that will >help with model block? I have to model a pseudo-PBX system and cannot seem >to get started. I keep drawing the same image over and over. x lines in to >y phones with conference combinations. I am stuck. > >Duane > > Subject: Re: (SMU) Prioritising Events. baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- I didn't intend to open up this issue, but since I did.... >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to yoakley... > >> Just a quick caveat.
If the self-directed events are implemented >> as direct procedure calls AND the calls are made from within >> the action (basically replacing the asynchronous event generation), >> then you have to be aware that you have opened the door to >> violating the event ordering rule and also that you may cause >> issues due to data inconsistencies. Fortunately, the OOA patterns >> which cause trouble are a little uncommon so there are >> practical ways to deal with these issues and still get the >> benefits of synchronous events. > >While your point about data consistency is well-taken, I would like to >be sure I understand what you meant by violating the event ordering >rule. I assume you are referring to the rule that says events between >the same two instances must be processed in the same order as they were >issued. Are you referring to the possibility of nested events where an >action issues two self-directed events and the target action of the >first event issued also generates a self-directed event? >>[Yoakley's response:] >>That's the pattern. >> >>dy I think the problem of data inconsistency is solved by generating the self-directed event (i.e. making the procedure call) at the end, as others have already pointed out. At this point the action is "complete" and the object should be in a consistent state and ready to transition to the next state. As for the pattern in which an action issues two self-directed events..., I have a hard time coming up with a situation in which anyone would really need to do this. So, my quick answer is DON'T DO THAT. However, I think that it is possible to build an architecture that detects this situation and deals with it. (But it is certainly not worth the effort that it would take!) Bary Hogan LMTAS Subject: Re: (SMU) Prioritising Events. baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- >"John D. Yeager" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Then implementing the "Send self EA1" as a direct function call at its >point in the action can lead to the events EB2 and EB1 being reversed >in order. The simple solution is to move the self-directed events to >the end of the action. However, this may require marshalling the event >data to allow sending it at the end -- typically avoiding marshalling >is one of the reasons for wanting to make the call directly. > It could be one of the reasons, but not the primary reason in this case. The primary reason in this case is to ensure that the self-directed event is received before any other events. Other reasons might include avoiding the overhead in using an asynchronous event queuing mechanism. In practice, I have not found saving the event data till the end of the action to be much of a problem. A data structure local to the action procedure in question usually does the trick. Bary Hogan LMTAS Subject: Re: (SMU) Prioritising Events. baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Andrew Mangogna writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >Actually what solves the problem best is an architecture that can >detect the circumstance and execute the transition only after the >"foo" action has completed.
This is a well known problem with devising >software mechanisms to execute finite state machines and is an artifact >of the transitory states that arise in the usual formulations of state >machines that are used for software. It is not that difficult to solve, >but it does mean that the transition execution must keep track of whether >it is currently executing an action and store the event that causes the >transition to "bar" away to be executed after the "foo" action. The fact >that the event that causes the transition can have parameters complicates >the matter a bit. My own experience in devising state machine execution >engines is that to require the "direct execution" transition to be coded >only at the end of the action is far too cumbersome and error prone. > I'll have to disagree. In practice, I haven't found it to be cumbersome at all (and it works very well). It can be error prone when code generation is done manually. It is easy to forget to move the "direct execution" to the end. However, if automated, I don't see how it would be error prone (other than a few rare situations that have been discussed previously). Bary Hogan LMTAS 'archive.9802' -- Subject: Re: (SMU) Prioritising Events. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Hogan... > I think the problem of data inconsistency is solved by generating the > self-directed event (i.e. making the procedure call) at the end, as > others have already pointed out. At this point the action is "complete" > and the object should be in a consistent state and ready to transition > to the next state. > > As for the pattern in which an action issues two self-directed > events..., I have a hard time coming up with a situation in which > anyone would really need to do this. So, my quick answer is DON'T DO > THAT. However, I think that it is possible to build an architecture > that detects this situation and deals with it. (But it is certainly > not worth the effort that it would take!) Before OOA96 introduced a notation for depth-first iterations, these had to be done with self-directed events. One state action would extract the set of instances and generate an event for each member. Those events would go to the action that would perform the sequential processing (loop body) on each one. Many operations on ordered sets needed to be done this way. If we were still doing ADFDs we would probably still do it this way simply because we feel the OOA96 notation was klutzy. Aside from depth-first iterations, there is another, more common reason for using two or more self-directed events: you need to make sure the state machine is in a particular state to receive external events when processing is done. Suppose I have three states, S1, S2, and S3. Also suppose there are two ways to trigger processing that must eventually get to S3.

---> S2 -------> S3

or

---> S1 -------> S2 -------> S3

In the first case external events take care of all the transitioning. But in the second case the transitioning events must be generated internally by the state machine. S2 can't generate the event to S3 because that would be premature in the first case and it cannot know how the FSM came to be in S2. Therefore, S1 has to generate both events. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com
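For the iteration idiom above, a minimal sketch (invented names, assuming a C++ target) of the pre-OOA96 pattern: one action fans the set out as one self-directed event per member, and a second action plays the loop body:

    #include <deque>
    #include <vector>

    static std::deque<int> self_events;  // this instance's expedited queue

    // Loop-body state action: sequential processing of one member per event.
    void process_member_action(int member) {
        // ... work on one member of the ordered set ...
    }

    // Extracting state action: generate one self-directed event per member.
    void extract_set_action(const std::vector<int>& members) {
        for (int m : members)
            self_events.push_back(m);
    }

    // The expedite rule guarantees these events are accepted before any
    // external event, so the set is processed as an unbroken loop.
    void run(const std::vector<int>& members) {
        extract_set_action(members);
        while (!self_events.empty()) {
            process_member_action(self_events.front());
            self_events.pop_front();
        }
    }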
Subject: Re: (SMU) Prioritising Events. "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- -----Original Message----- From: lahman To: shlaer-mellor-users@projtech.com Date: Tuesday, February 03, 1998 12:03 AM Subject: Re: (SMU) Prioritising Events. >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Aside from depth-first iterations, there is another, more common reason >for using two or more self-directed events: you need to make sure the >state machine is in a particular state to receive external events when >processing is done. Suppose I have three states, S1, S2, and S3. Also >suppose there are two ways to trigger processing that must eventually >get to S3.
>
> ---> S2 -------> S3
>
> or
>
> ---> S1 -------> S2 -------> S3
>
>In the first case external events take care of all the transitioning. >But in the second case the transitioning events must be generated >internally by the state machine. S2 can't generate the event to S3 >because that would be premature in the first case and it cannot know how >the FSM came to be in S2. Therefore, S1 has to generate both events. > I don't find a reason for doing example 2. Surely you would want to generate a transition from S1 to S3, perhaps through a new state S4, or, you would generate an event to transition to S2 and let S2 take care of a transition to S3, if required. I'd be interested to see under what circumstances state S1 would want to generate 2 events to transition to S3 via S2. Les. Subject: Re: (SMU) Prioritising Events. baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lahman... > >Aside from depth-first iterations, there is another, more common reason >for using two or more self-directed events: you need to make sure the >state machine is in a particular state to receive external events when >processing is done. Suppose I have three states, S1, S2, and S3. Also >suppose there are two ways to trigger processing that must eventually >get to S3.
>
> ---> S2 -------> S3
>
> or
>
> ---> S1 -------> S2 -------> S3
>
>In the first case external events take care of all the transitioning. >But in the second case the transitioning events must be generated >internally by the state machine. S2 can't generate the event to S3 >because that would be premature in the first case and it cannot know how >the FSM came to be in S2. Therefore, S1 has to generate both events. I'll concede that there may be some situations in which generating two self-directed events from the same state is useful. However, I think that it can almost always be modeled some other way. In the example above, the fact that the state model needs to wait in one case, and not in another, makes me think that there are really two different states for S2. One of these states is completely transitional, and always generates an internal event to go to state S3. This state is only entered by a self-directed event from state S1. The other S2 state always waits for an external event. So it becomes:

---> S2a -------> S3

or

---> S1 -------> S2b -------> S3

If the actions of both of the S2 states are mostly the same, I would create a synchronous service that is called from both states.
In fact, it might be possible to eliminate the transitional S2 state
entirely by calling the synchronous service from S1 and then generating
the self-directed event to go directly to state S3.

Bary Hogan
LMTAS

Subject: Re: (SMU) Prioritising Events.
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Hogan...

> I'll concede that there may be some situations in which generating two
> self-directed events from the same state is useful. However, I think
> that it can almost always be modeled some other way.
>
> In the example above, the fact that the state model needs to wait in
> one case, and not in another makes me think that there are really two
> different states for S2.

A quibble, but it is not so much a question of needing to wait as it is
a question of how the flow of control works. In the first case the flow
of processing in hand is being dictated by external state machines or
domains; there is a continuing interaction. In the second case no
external entities are involved except for the initial event. I agree
that S3 is likely to be some sort of Ready state in practice, but this
is not necessarily so.

> One of these states is completely transitional, and always generates
> an internal event to go to state S3. This state is only entered by a
> self-directed event from state S1. The other S2 state always waits
> for an external event. So it becomes:
>
> ---> S2a ---- ---> S3
>
> or
>
> ---> S1 ---- ---> S2b ---- ----> S3

I agree, there are ways to avoid the multiple self-directed events.
However, I don't care for this particular solution because the S2a and
S2b states have identical actions except for the generated event. I see
that as rather artificial and as being driven by a desire to use
synchronous calls in the architecture. If synchronous calls are not used
in the architecture, then the multiple self-directed events from the
same action can easily be handled correctly. It also clutters the STD,
which I tend to regard as very precious real estate because when
debugging I want the entire STD to fit on one 11x14 sheet and be
readable (defined as 12 pt type for action descriptions for those of us
in the Bifocal Generation).

> If the actions of both of the S2 states are mostly the same, I would
> create a synchronous service that is called from both states. In
> fact, it might be possible to eliminate the transitional S2 state
> entirely by calling the synchronous service from S1 and then
> generating the self-directed event to go directly to state S3.

Though the actions are the same, there is no guarantee that you can use
a synchronous service. In the example I used separately to Munday, the
actions are likely to be a sequence of bridge events. I would regard
hiding those in a synchronous service as bad form, even if the tool
allowed it, because they would be crucial to the level of abstraction
for the domain (i.e., in defining the flow of control).

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) issues concerning SMALL
"Eric V. Smith" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I've been reading the document on projtech.com describing the action
language SMALL, and I have a few issues. I'm new to all of this, so if
I make a fool of myself, please be gentle!
If this has been discussed before, I could not find it in the archives. First, on page 24 is the section titled "Failed Instance Creation". It describes how to detect if instance creation will fail due to duplicate identifying attributes. Surely there must be a better way to detect this than testing first. I have two problems here: 1) You have to maintain the statement that creates the identifying attributes in two places, and 2) If the work needed to create the attributes is expensive you have to do it twice. Would it be possible to have the creation of an instance optionally be similar to a test process, and set a value that could be acted upon? Or maybe something similar to C++ exceptions? Second, I have an issue with "Repeated Access" on page 22. As it says in the first paragraph "for the rest of the action or synchronous service (or until a different set of references to the same object is established), the name of the object followed by an empty set of parenthesis refers to that set of references". I can see situations where I need multiple sets of references to the same object, in particular where I have reflexive relationships. Would it be possible or desirable to provide an optional name to a set of references, so that multiple references could be re-used? Third, page 4 mentions that the language is terse, which is certainly true, and hints of future work on an alternative verbose form. Has there been any discussion or publication of such a verbose form? Thanks. Eric. Subject: Re: (SMU) issues concerning SMALL lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Smith... > First, on page 24 is the section titled "Failed Instance Creation". > It describes how to detect if instance creation will fail due to > duplicate identifying attributes. Surely there must be a better way > to > detect this than testing first. I have two problems here: 1) You have > > to maintain the statement that creates the identifying attributes in > two places, and 2) If the work needed to create the attributes is > expensive you have to do it twice. Would it be possible to have the > creation of an instance optionally be similar to a test process, and > set a value that could be acted upon? Or maybe something similar to > C++ exceptions? I think you may be reading in more than their intent. If you were doing an ADFD and there was a chance that there already was an instance, you would have to do a similar operation: try to Find the existing instance using the proposed identifiers and check if you did actually find one. I believe they are simply providing a syntax for doing that. As far as maintaining the identifying attributes is concerned, I believe the local variable, ~BenchNumber, is handling this in the same manner that two data flows would do it in the ADFD. That is, the attributes are still defined in one place. In the ADFD they would be sent out on two flows: one to the Find and one to the Create. In SMALL they are stored as transient data that is accessed in multiple places. > Second, I have an issue with "Repeated Access" on page 22. As it says > > in the first paragraph "for the rest of the action or synchronous > service (or until a different set of references to the same object is > established), the name of the object followed by an empty set of > parenthesis refers to that set of references". 
> I can see situations where I need multiple sets of references to the
> same object, in particular where I have reflexive relationships.
> Would it be possible or desirable to provide an optional name to a
> set of references, so that multiple references could be re-used?

I agree there is potentially an ambiguity here. However, one can
eliminate the ambiguity by assigning the set to a reference variable. I
believe this would be the preferred mechanism if one needs the set in
different statements -- it would avoid subsequent miscues during
maintenance. Some might regard the notation as described to be handy
within a single statement, as in the example.

> Third, page 4 mentions that the language is terse, which is certainly
> true, and hints of future work on an alternative verbose form. Has
> there been any discussion or publication of such a verbose form?

You know as much as the rest of us at this point.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL
Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 10:20 AM 2/9/98 -0500, you wrote:
>lahman writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Responding to Smith...
>
>> First, on page 24 is the section titled "Failed Instance Creation".
>> It describes how to detect if instance creation will fail due to
>> duplicate identifying attributes. Surely there must be a better way
>> to detect this than testing first. I have two problems here: 1) You
>> have to maintain the statement that creates the identifying
>> attributes in two places, and 2) If the work needed to create the
>> attributes is expensive you have to do it twice. Would it be
>> possible to have the creation of an instance optionally be similar
>> to a test process, and set a value that could be acted upon? Or
>> maybe something similar to C++ exceptions?
>
>I think you may be reading in more than their intent. If you were doing
>an ADFD and there was a chance that there already was an instance, you
>would have to do a similar operation: try to Find the existing instance
>using the proposed identifiers and check if you did actually find one.
>I believe they are simply providing a syntax for doing that.

HS is correct. The language-supplied process isNone? simply checks to
see if there are any instances. Then you can decide what to do.

>As far as maintaining the identifying attributes is concerned, I believe
>the local variable, ~BenchNumber, is handling this in the same manner
>that two data flows would do it in the ADFD. That is, the attributes
>are still defined in one place. In the ADFD they would be sent out on
>two flows: one to the Find and one to the Create. In SMALL they are
>stored as transient data that is accessed in multiple places.
>
>> Second, I have an issue with "Repeated Access" on page 22. As it says
>> in the first paragraph "for the rest of the action or synchronous
>> service (or until a different set of references to the same object is
>> established), the name of the object followed by an empty set of
>> parenthesis refers to that set of references". I can see situations
>> where I need multiple sets of references to the same object, in
>> particular where I have reflexive relationships. Would it be possible
>> or desirable to provide an optional name to a set of references, so
>> that multiple references could be re-used?
>
>I agree there is potentially an ambiguity here. However, one can
>eliminate the ambiguity by assigning the set to a reference variable. I
>believe this would be the preferred mechanism if one needs the set in
>different statements -- it would avoid subsequent miscues during
>maintenance. Some might regard the notation as described to be handy
>within a single statement, as in the example.
HS is again right that you can assign the reference collection to a
reference variable. I have to tell you tho' that it makes me nervous
because all of these intermediate variables make it harder for the
translation engine.

>> Third, page 4 mentions that the language is terse, which is certainly
>> true, and hints of future work on an alternative verbose form. Has
>> there been any discussion or publication of such a verbose form?
>
>You know as much as the rest of us at this point.

As I indicated earlier, we are presently working with the OMG to define
a standard action language THAT WILL BE TRANSLATABLE. (That's a
requirement!) The Object Constraint Language (OCL) is an adopted
technology that complements the UML. It has a general (and rather nice)
syntax for data access that is more verbose than SMALL. We will probably
want to use that instead of SMALL syntax, though there is another
language, CDL, that we must consider. I expect to drive development of
SMALL via the OMG process. (That is, we will submit a modified SMALL to
the OMG.) Amongst those modifications, we will need to make the language
more verbose -- though I still would prefer to keep the language
'small'. Try writing SMALL on a state-action box!

-- steve mellor

Subject: Re: (SMU) issues concerning SMALL
David Stone writes to shlaer-mellor-users:
--------------------------------------------------------------------

I may have misunderstood this, so my apologies if I have. If I have
understood rightly, it is proposed that the new action language only
provides a way of testing whether an instance with given identifying
attributes exists before creation. However, such an approach makes
things very difficult if you adopt the concurrent interpretation of
time. Two independent active instances might run as follows:

[in instance 1] check whether obj A id 1 exists
[in instance 2] ...................... check whether obj A id 1 exists
[in instance 2] ...................... no, so create obj A with id 1
[in instance 1] no, so create obj A with id 1

Thus two As exist, both with id 1 -- an error.

It is very hard to make the architecture add appropriate locking to
prevent this unless the test and creation accessor are combined into one
construct in the action language, since arbitrary action statements may
intervene between the test and the creation accessor.

Please, Steve or whoever, make sure that whatever solution you adopt is
workable under both interpretations of time.

-- David Stone
Sent by courtesy of, but not an official communication from:
Simoco Europe, P.O.Box 24, St Andrews Rd, CAMBRIDGE, CB4 1DP, UK
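Stone's interleaving is easy to reproduce in any concurrent setting. The
sketch below (Python; the InstanceStore class and its method names are
invented for illustration) shows the unsafe test-then-create pattern and
the combined construct he is asking for, in which the test and the
create happen under one lock:

    import threading

    class InstanceStore:
        """Toy architecture-level store, keyed by identifying attributes."""
        def __init__(self):
            self._instances = {}
            self._lock = threading.Lock()

        def exists(self, ident):              # an isNone?-style test
            return ident in self._instances

        def create(self, ident, **attrs):     # a bare creation accessor
            self._instances[ident] = attrs

        def create_if_absent(self, ident, **attrs):
            """Test and create combined into one atomic construct."""
            with self._lock:
                if ident in self._instances:
                    return False              # duplicate identifier
                self._instances[ident] = attrs
                return True

    store = InstanceStore()
    # Unsafe: two concurrent actions may both pass the test before either
    # creates, yielding two A instances with id 1:
    #     if not store.exists(1): store.create(1)
    # Safe: only one of two concurrent calls can succeed:
    ok = store.create_if_absent(1, name="A1")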
Subject: Re: (SMU) issues concerning SMALL
MiVock@aol.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

In a message dated 98-02-18 11:41:55 EST, you write:

<< >> Second, I have an issue with "Repeated Access" on page 22. As it
 >> says in the first paragraph "for the rest of the action or
 >> synchronous service (or until a different set of references to the
 >> same object is established), the name of the object followed by an
 >> empty set of parenthesis refers to that set of references". I can
 >> see situations where I need multiple sets of references to the same
 >> object, in particular where I have reflexive relationships. Would
 >> it be possible or desirable to provide an optional name to a set of
 >> references, so that multiple references could be re-used?
 >
 >I agree there is potentially an ambiguity here. However, one can
 >eliminate the ambiguity by assigning the set to a reference variable.
 >I believe this would be the preferred mechanism if one needs the set
 >in different statements -- it would avoid subsequent miscues during
 >maintenance. Some might regard the notation as described to be handy
 >within a single statement, as in the example.

 HS is again right that you can assign the reference collection to a
 reference variable. I have to tell you tho' that it makes me nervous
 because all of these intermediate variables make it harder for the
 translation engine. >>

Here's a whacked-out, insane idea that I'm sure someone else has had on
this ambiguity issue. What if the following syntax were supported by
SMALL (forgive transgressions in the SMALL syntax, I'm still learning):

SomeObject->[.reflexive_relationship]SomeObject
{
    SomeObject().attribute < 0;    // set the attribute for the "current"
                                   // instance of SomeObject
                                   // (the parens are optional)
    SomeObject(..).attribute < 1;  // set the attribute for the previous
                                   // instance of SomeObject
}

You could go really nuts and do

SomeObject->[.reflexive_relationship]SomeObject->[.reflexive_relationship]SomeObject
{
    SomeObject().attribute < 0;      // the "current" instance of SomeObject
    SomeObject(..).attribute < 1;    // the previous instance of SomeObject
    SomeObject(../..).attribute < 2; // the previous-previous instance
                                     // of SomeObject
}

There must be some parsing problem here. Fire when ready!!!

Mike Vock
SRA International (formerly w/Abbott Labs)

Subject: Re: (SMU) issues concerning SMALL
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Stone...

> If I have understood rightly, it is proposed that the new action
> language only provides a way of testing whether an instance with given
> identifying attributes exists before creation. However, such an
> approach makes things very difficult if you adopt the concurrent
> interpretation of time. Two independent active instances might run as
> follows:
>
> [in instance 1] check whether obj A id 1 exists
> [in instance 2] ...................... check whether obj A id 1 exists
> [in instance 2] ...................... no, so create obj A with id 1
> [in instance 1] no, so create obj A with id 1
>
> Thus two As exist, both with id 1 -- an error.
>
> It is very hard to make the architecture add appropriate locking to
> prevent this unless the test and creation accessor are combined into
> one construct in the action language, since arbitrary action
> statements may intervene between the test and the creation accessor.

I would argue an even stronger case: the architecture cannot guarantee
data integrity optimally unless the test and create are combined.
To preserve relational integrity the architecture would have to (a) lock
against what might be rather than what is, and (b) effectively
understand the intent of the test in instance 2 in order to lock out
instance 2's action until instance 1's action completed. Even if there
were some foolproof way to do all this, the key involved in instance 2
could have a value determined only at run-time, so the architecture
would effectively have to simulate instance 2 on the fly to figure out
what it _might_ do prior to letting the action actually execute. I
don't think so.

You could probably make some simplifying assumptions, like locking out
any action that referenced any A as soon as instance 1's action starts,
but the potential for deadlock and the performance hits would probably
defeat the purpose of using the simultaneous view in the first place.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mellor...

> HS is again right that you can assign the reference collection to a
> reference variable. I have to tell you tho' that it makes me nervous
> because all of these intermediate variables make it harder for the
> translation engine.

I don't see why this should be a problem for the translator. Variable
declarations could be handled a la BASIC; the typing is pretty
deterministic. The guard syntax explicitly handles the scoping of the
_values_ of the intermediate variable, so aliasing conflicts are not a
problem. It seems to me that the problems are, at most, no worse than
for any run-of-the-mill interpreter. It would be trivial for a
multi-pass compiler. And if you are going for optimization, the
translator would have to make multiple passes anyway.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL
David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

MiVock@aol.com wrote:
-- snip --

SomeObject->[.reflexive_relationship]SomeObject
{
    SomeObject().attribute < 0;    // set the attribute for the "current"
                                   // instance of SomeObject
                                   // (the parens are optional)
    SomeObject(..).attribute < 1;  // set the attribute for the previous
                                   // instance of SomeObject
}

-- snip --

I can certainly see where Mike's semantic helps with the operations on
these pesky (but all too common) reflexive relationships. I guess we
could argue for a special semantic to access "next" from the current
context as well (e.g. maybe I don't always want to go to an instance and
look back; maybe I want to look forward as well).

For those of us that haven't really dusted off the SMALL spec yet, how
would you go about a simple operation such as traversing the set of
instances in a reflexive relationship (e.g. a reflexive relationship on
the many side of another relationship) and inserting, deleting, or
combining instances? Let's say, for example, that we want to return a
chunk of memory to a heap. The memory chunks are ordered by starting
address, so basically I am inserting in the chain.
To make it a little more interesting, I will probably want to combine
memory chunks that are adjacent.

David Yoakley
________________________________________________________________________
David Yoakley                     Objective Innovations Inc.
Phone: 512.288.5954               14606 Friendswood Lane
Fax: 512.288.6381                 Austin, TX 78737
Replies: yoakley@oiinc.com
________________________________________________________________________
Did you ever notice when you blow in a dog's face he gets mad at you?
But when you take him in a car he sticks his head out the window!
                                  Steve Bluestone

Subject: RE: (SMU) issues concerning SMALL
"Eric V. Smith" writes to shlaer-mellor-users:
--------------------------------------------------------------------

While I like the concept here, I think the syntax could be improved by
using names or labels and not by relative positions. Instead of using
the ".." syntax to move up to previous saved contexts, I'd suggest
naming them. Then you could have something like (using your example):

SomeObject:a->[.reflexive_relationship]SomeObject:b->[.reflexive_relationship]SomeObject:c
{
    SomeObject:c().attribute < 0; // set the attribute for the "current"
                                  // instance of SomeObject
    SomeObject:b().attribute < 1; // set the attribute for the previous
                                  // instance of SomeObject
    SomeObject:a().attribute < 2; // set the attribute for the
                                  // previous-previous instance of SomeObject
}

This is using the hypothetical syntax ":label" to name a context. The
primary advantages are clarity and protection from silent changes. If
your first example (with one reflexive relationship) were changed to the
second one (with 2 such relationships), then the ".." notation might
refer to the wrong context if the newly added object were inserted
between the existing ones. With names (instead of relative positions),
you are protected against this.

Eric.

-----Original Message-----
From: MiVock@aol.com [SMTP:MiVock@aol.com]
Sent: Wednesday, February 18, 1998 3:53 PM
To: shlaer-mellor-users@projtech.com
Subject: Re: (SMU) issues concerning SMALL
-- snip --
Subject: RE: (SMU) issues concerning SMALL
"Vock, Michael" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I prefer Eric's "labelled" context far more than the ".." thing. How
about an opinion from the decision makers?

Mike

>----------
>From: Eric V. Smith[SMTP:EricSmith@windsor.com]
>Sent: Thursday, February 19, 1998 9:03 PM
>To: 'shlaer-mellor-users@projtech.com'
>Subject: RE: (SMU) issues concerning SMALL
>
>While I like the concept here, I think the syntax could be improved by
>using names or labels and not by relative positions. Instead of using
>the ".." syntax to move up to previous saved contexts, I'd suggest
>naming them. Then you could have something like (using your example):
>
>SomeObject:a->[.reflexive_relationship]SomeObject:b->[.reflexive_relationship]SomeObject:c
>{
>    SomeObject:c().attribute < 0; // the "current" instance of SomeObject
>    SomeObject:b().attribute < 1; // the previous instance of SomeObject
>    SomeObject:a().attribute < 2; // the previous-previous instance
>                                  // of SomeObject
>}
>
>This is using the hypothetical syntax ":label" to name a context. The
>primary advantages are clarity and protection from silent changes. If
>your first example (with one reflexive relationship) were changed to
>the second one (with 2 such relationships), then the ".." notation
>might refer to the wrong context if the newly added object were
>inserted between the existing ones. With names (instead of relative
>positions), you are protected against this.
>
>Eric.
-- snip --

Subject: Re: (SMU) issues concerning SMALL
David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

I see what I missed before. Mike's solution gives access to each context
that had been opened by the traversal. In that case, forget my dumb
question about referring to "next". I do like the named context better
than the ".." syntax.

New topic: What I was hoping to get at with my previous heap example was
the issue of iterating over a reflexive relationship. This is highly
recursive, so I am wondering if in such cases it is expected that a
self-directed event be used to recurse the current state or if there
would be some iteration in the language.

David

Eric V. Smith wrote:
-- snip --
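The "recurse the current state" pattern Yoakley asks about can be
pictured outside SMALL. Here is a small Python sketch (Node,
walk_by_events and the event queue are all invented names) of iterating
a reflexive "next" chain by generating a self-directed event per
instance rather than looping inside one action:

    from collections import deque

    class Node:
        def __init__(self, name, nxt=None):
            self.name, self.next = name, nxt

    def walk_by_events(first, action):
        # Each dispatched "event" runs the state action for one instance;
        # the action then generates a self-directed event carrying the
        # successor, which is queued rather than executed in-line.
        queue = deque([first])
        while queue:
            node = queue.popleft()
            action(node)                  # the state action's real work
            if node.next is not None:
                queue.append(node.next)   # gen: event to revisit this state

    c = Node("c"); b = Node("b", c); a = Node("a", b)
    walk_by_events(a, lambda n: print(n.name))   # prints a, b, c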
Subject: Re: (SMU) issues concerning SMALL
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Yoakley wrote:
> What I was hoping to get at with my previous heap example was the
> issue of iterating over a reflexive relationship. This is highly
> recursive so I am wondering if in such cases it is expected that a
> self-directed event be used to recurse the current state or if
> there would be some iteration in the language.

I suggest that you forget all this iteration stuff and just state what
you want. If I understand the problem correctly, you have a set of free
chunks, defined by

FREE_CHUNK(*start_address, *end_address);

and you want to add a block to it, merging as necessary. I will assume
that there is no overlap. Assuming you want to add
(~block_start, ~block_end), we have:

// get any existing chunk that touches the new block
FREE_CHUNK(one, end_address=~block_start-1) > pre_chunk;
FREE_CHUNK(one, start_address=~block_end+1) > post_chunk;

// work out if they existed
pre_chunk | None? !no_pre, !merge_start;
post_chunk | None? !no_post, !merge_end;

// now do one of 4 things, depending on the results of the tests
!merge_start: !no_post: ~block_end > pre_chunk(end_address);
!merge_end: !no_pre: ~block_start > post_chunk(start_address);
!no_pre: !no_post: (~block_start, ~block_end) >> FREE_CHUNK(...);
!merge_start: !merge_end: [
    post_chunk(end_address) > ~post_end;
    ~post_end > pre_chunk(end_address);
    << post_chunk;
]

This will work if you don't have to worry about relationships. If you
do have a relationship between chunks, then you'll need to be a bit more
careful when you create and delete chunks. Personally, I'd say it's an
architectural decision whether to store the instances as a linked list
and/or sort them by their addresses.

Dave. Not speaking for Mitel Semiconductor Ltd.

--
Dave Whipp, Embedded Systems Group, Mitel Semiconductor,
Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277   mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306   http://www.gpsemi.com
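For readers who find the SMALL opaque, the same merge logic reads
naturally in Python. This is only a sketch of the algorithm above (the
dict-based store and the function name are invented; chunks maps
start_address to end_address):

    def free_block(chunks, block_start, block_end):
        """Return a block to the heap, merging with touching free chunks."""
        # find any existing chunk that touches the new block
        pre_start = next((s for s, e in chunks.items()
                          if e == block_start - 1), None)
        post_exists = (block_end + 1) in chunks

        if pre_start is not None and not post_exists:
            chunks[pre_start] = block_end                    # extend pre_chunk
        elif pre_start is None and post_exists:
            chunks[block_start] = chunks.pop(block_end + 1)  # extend post_chunk
        elif pre_start is None and not post_exists:
            chunks[block_start] = block_end                  # new FREE_CHUNK
        else:
            # merge all three: grow pre_chunk, delete post_chunk
            chunks[pre_start] = chunks.pop(block_end + 1)

    chunks = {0: 99, 200: 299}
    free_block(chunks, 100, 199)    # touches both neighbours
    assert chunks == {0: 299}

As in the SMALL version, there is no iteration: the two neighbours are
located directly by their addresses.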
Subject: Re: (SMU) issues concerning SMALL
David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:
> This will work if you don't have to worry about relationships.
> If you do have a relationship between chunks, then you'll need
> to be a bit more careful when you create and delete chunks.

I am so glad you asked that. You have gotten to a more fundamental
question. Does the fact that I am maintaining an order of the instances
(by address) imply that there is a relationship there? I am really
interested in clearly delineating the correct usage of reflexive
relationships (the memory example was contrived to that end), so I
don't want to get too focused on the heap example.

dy
________________________________________________________________________
David Yoakley                     Objective Innovations Inc.
Phone: 512.288.5954               14606 Friendswood Lane
Fax: 512.288.6381                 Austin, TX 78737
Replies: yoakley@oiinc.com
________________________________________________________________________
Did you ever notice when you blow in a dog's face he gets mad at you?
But when you take him in a car he sticks his head out the window!
                                  Steve Bluestone

Subject: Re: (SMU) issues concerning SMALL
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yoakley...

> I am so glad you asked that. You have gotten to a more fundamental
> question. Does the fact that I am maintaining an order of the
> instances (by address) imply that there is a relationship there?
> I am really interested in clearly delineating the correct usage of
> reflexive relationships (the memory example was contrived to that
> end), so I don't want to get too focused on the heap example.

I am confused by the context. I thought this thread started with a
syntax for navigating reflexive relationships. Are you suggesting that
because the addresses are ordered there should be a _second_
relationship? Or have you branched away from the notation discussion to
the more general issue of whether manipulating addresses in a manner
that depends upon their order implies a reflexive relationship?

For the former I would definitely come down on the side of That Depends
-- on what the original relationship described. For the latter, I would
argue that the relationship is necessary if the address order is used
explicitly in the SMALL code. If the SMALL code depends upon the order,
then the order is meaningful at the OOA level of abstraction and the
dependence should be reflected in the IM with a relationship.

But this would raise an interesting issue if SMALL included support for
some of the things I would like to see, like ordered sets. If one can
order a set of instances locally in an action, does this imply that
there should be a conditional reflexive relationship in the IM to
reflect this? I would hope not, because it could get klutzy real quick
if the same set could be ordered several different ways.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL
David Yoakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> I am confused by the context. I thought this thread started with a
> syntax for navigating reflexive relationships. Are you suggesting
> that because the addresses are ordered there should be a _second_
> relationship? Or have you branched away from the notation discussion
> to the more general issue of whether manipulating addresses in a
> manner that depends upon their order implies a reflexive relationship?

I apologize.
I am still interested in hearing opinions about iterating over reflexive
relationships. And I did effectively fork a new thread. So let me
summarize the issues that I am questioning at this moment.

1. Is it valid for a domain to explicitly iterate over a reflexive
relationship, and if so, how is this done in SMALL?

2. Does the maintenance of instance order imply a reflexive
relationship?

Dave's response, I think, showed that my example did not require
iteration and in fact did not require ordering of instances. I actually
liked his response quite a bit, but wanted to press on and see if we
could flush out any additional observations from other folks who are
more comfortable with reflexive relationships than I.

> For the former I would definitely come down on the side of That
> Depends -- on what the original relationship described. For the
> latter, I would argue that the relationship is necessary if the
> address order is used explicitly in the SMALL code. If the SMALL code
> depends upon the order, then the order is meaningful at the OOA level
> of abstraction and the dependence should be reflected in the IM with
> a relationship.

I agree. And in the heap example, there does not seem to be persistent
order. I originally thought there was, but as Dave's SMALL code shows,
there is no order among the free blocks. There is only a relationship
between blocks that *might* exist and a candidate new block.
Interestingly enough, if such a relationship exists then the two or
three blocks get combined such that no relationship among *existing*
instances ever materializes.

> But this would raise an interesting issue if SMALL included support
> for some of the things I would like to see, like ordered sets. If one
> can order a set of instances locally in an action, does this imply
> that there should be a conditional reflexive relationship in the IM
> to reflect this? I would hope not, because it could get klutzy real
> quick if the same set could be ordered several different ways.

Can you recall some of the cases where you needed ordered sets? I am
curious to see how these examples hold up under scrutiny.

dy
________________________________________________________________________
David Yoakley                     Objective Innovations Inc.
Phone: 512.288.5954               14606 Friendswood Lane
Fax: 512.288.6381                 Austin, TX 78737
Replies: yoakley@oiinc.com
________________________________________________________________________
Did you ever notice when you blow in a dog's face he gets mad at you?
But when you take him in a car he sticks his head out the window!
                                  Steve Bluestone

Subject: Re: (SMU) issues concerning SMALL
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Yoakley...

> 1. Is it valid for a domain to explicitly iterate over a reflexive
> relationship, and if so, how is this done in SMALL?

I believe so. I was pushing for depth-first iteration for awhile before
OOA96. However, the only convincing examples I could come up with all
involved ordered sets. In those cases there would be an iteration over
the reflexive relationship.

> 2. Does the maintenance of instance order imply a reflexive
> relationship?

I believe it does, for the reasons I gave. However, the presence of a
reflexive relationship would not imply order (e.g., sibling-of, except
in cultural stereotypes where sons are numbered).

> Can you recall some of the cases where you needed ordered sets? I am
> curious to see how these examples hold up under scrutiny.
I just happen to have one from the last domain I worked upon. Consider a
client who issues commands to your application and expects an answer
back that is specific to each command. If the commands need to be
buffered, then the messages must be stored and sent back after execution
of all the commands. If the client retrieves the messages via a get-next
mechanism and expects the messages to come back in the same order as the
original commands were issued, the idea of an ordered message set leaps
to mind.

As it happens, we create a message instance with each command and assign
our own increasing message number as an identifier. When we are told to
execute, we get the results and update the message data. (We effectively
instantiated a 1:1 between message and command so we can find the
message to fill in when we process results for the command.) We order
the message set after all the results have been processed and
peel-and-delete them as the client makes each get-next request. (Our
tool's action language supports ordered sets.)

Do we _need_ an ordered set? Clearly not in this case. We could do it by
simply doing a Find for the next higher number we assigned. I will go
further and speculate that this is generally true. However, I don't
think that is a very natural way to do it. In the problem space the set
is clearly ordered in a meaningful way and one should be able to express
that. If an ordered set can be expressed, then you should be able to
operate upon it as an ordered set (e.g., get-first, get-next, etc.). In
this example the get-next request is the way the application's client
thinks about it, so why shouldn't the application think about it that
way?

The main reason, though, that I think the ordered set is essential is
performance. Creating the ordered set is an operation of O(N), at worst,
if it is done as messages are created, while the Find approach is a
sequence of N operations approaching O(NlogN) in total. And the only way
O(NlogN) can be achieved is if the architecture orders the set; for a
true unordered set the cost would be O(N*N). It would be very difficult
to colorize an automatic code generator to Do the Right Thing. And even
if you could, what will the architecture do in response to that
colorization? It will order the set on instance creation and extract the
elements via get-next, just like the natural way to do it in the OOA!

Bottom line: I don't think we absolutely need ordered sets, but I think
it is unnecessarily difficult to live without them. The issue for me is
that I don't see any way that rigor is lost by introducing ordered sets.
If RDBMSes can live with indexes, I think S-M can live with ordered
sets. If no rigor is lost, then why not support a more natural problem
expression (in some cases) that also makes life much easier for the
architecture?

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com
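The performance claim is easy to see in miniature. Below is a sketch
(Python; MessageStore and its method names are invented) contrasting the
two retrieval strategies: keeping the set ordered as messages are
created versus doing a Find over an unordered set for every get-next
request:

    import bisect

    class MessageStore:
        def __init__(self):
            self.ordered = []        # kept sorted as messages are created
            self.unordered = set()

        def create(self, msg_number):
            bisect.insort(self.ordered, msg_number)
            self.unordered.add(msg_number)

        def get_next_ordered(self):
            # peel-and-delete from the front of the ordered set
            return self.ordered.pop(0) if self.ordered else None

        def get_next_by_find(self, last):
            # Find the next higher number: a full O(N) scan per request,
            # hence O(N*N) over N requests for a truly unordered set
            candidates = [m for m in self.unordered if m > last]
            return min(candidates) if candidates else None

Since the message numbers are assigned in increasing order, each insort
is effectively an append, so building the ordered set costs about O(N)
overall, which is the point of lahman's comparison.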
Subject: Re: (SMU) issues concerning SMALL
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Yoakley wrote:
> 1. Is it valid for a domain to explicitly iterate over a reflexive
> relationship, and if so, how is this done in SMALL?

In SMALL, as currently defined, it is not easy to see how a single
action can iterate over a relationship. It is obviously possible to do
so using multiple states; and I will argue (below) that this is a
natural way to model the types of problem where such iteration is
required.

> 2. Does the maintenance of instance order imply a reflexive
> relationship?

It depends. If the order is explicitly used (in a succ/pred sense) then
a reflexive relationship is probably appropriate. However, there are two
caveats:

1. Reflexive relationships should be used with care. Detailed
exploration of the problem domain will often lead to the replacement of
the reflexive relationship with a "node" object. I.e., the use of a
reflexive relationship for ordering is a special case of the more
general problem of positioning an instance within a network (if arcs
always link exactly 2 nodes, then a reflexive relationship may be
usable).

2. The PT paper on Types within OOA introduced the concept of the
ordered type, where ordering is based on a network. In this paper, it
was suggested that the use of these types could allow some relationships
to be omitted. I think there is a value judgement to be made: is the use
of an ordered data type for an object's identifier sufficient, or is an
explicit relationship required?

> ...

Lahman gave an example of the use of ordered sets in a buffered
client-message-server scenario. His example asserted that a client would
create an ordered list of messages which a server would service
asynchronously -- placing the result in the message. Once all the
messages were processed, the client reads back the results using a
get-first/get-next mechanism.

As I mentioned above, I do not feel that this scenario justifies the
addition of ordered sets with get-first/get-next operations as a primary
mechanism within OOA. At most, it would be a service domain. However,
I'd like to pull the example apart a bit.

The scenario suggests that there are two active objects, which
communicate using a set of passive message objects. I believe this is
incorrect: the message itself has a lifecycle.

Some of its states are quite obvious. It is created by the client with a
command for the server (state 1 - waiting for server). At some point in
the future, the server has provided a response (state 2 - ready for
client). A bit later, the client will want to process that response
(state 3 - being processed by client). When the client has finished, the
message is no longer needed (state 4 - termination).

You can argue over the details, but it is clear that a lifecycle exists.
It is difficult to see how this type of example can be formulated to
exclude the active message.

Given that we now have 3 active objects (or object clusters), we can
think a bit more about assigning responsibility.

With a passive message object, it seems wrong to use an event for
iteration over messages within the client. With an active message,
however, it becomes natural for the client to issue an event when it has
finished processing a message. The event would be received by a message,
causing it to be deleted. At the same time, the message can be
responsible for causing the client to be sent the next message.

By moving responsibility for iteration away from the client, and into
the message, the state actions of the client are simplified -- hopefully
resulting in a more maintainable model. It is wrong to think that this
shift will result in a less efficient implementation. The idea that
events are less efficient than functions is a result of rather naive
architectures.

Dave. Not speaking for Mitel Semiconductor.
--
Dave Whipp, Embedded Systems Group, Mitel Semiconductor,
Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277   mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306   http://www.gpsemi.com

Subject: Re: (SMU) issues concerning SMALL
Gregg Kearnan writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:
>
> The idea that events are less efficient than functions is a result of
> rather naive architectures,

I'm curious, what is your idea of an architecture that is not naive?

--
**************************************************************
* Gregg Kearnan              Phone: 603-625-4050 x2557       *
* Summa Four, Inc            Fax: 603-668-4491               *
* email: kearnan@summa4.com                                  *
**************************************************************

Subject: Re: (SMU) issues concerning SMALL
Tim Brockwell writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:
--------------------------------------------------------------
> David Yoakley wrote:
>
> > 2. Does the maintenance of instance order imply a reflexive
> > relationship?
>
> It depends. If the order is explicitly used (in a succ/pred sense)
> then a reflexive relationship is probably appropriate. However, there
> are two caveats: ...
>
> Lahman gave an example of the use of ordered sets in a buffered
> client-message-server scenario. ...
>
> As I mentioned above, I do not feel that this scenario justifies the
> addition of ordered sets with get-first/get-next operations as a
> primary mechanism within OOA. At most, it would be a service domain.
>
> ... The scenario suggests that there are two active objects, which
> communicate using a set of passive message objects. I believe this is
> incorrect: the message itself has a lifecycle.
>
> Some of its states are quite obvious. It is created by the client
> with a command for the server (state 1 - waiting for server). At some
> point in the future, the server has provided a response (state 2 -
> ready for client). A bit later, the client will want to process that
> response (state 3 - being processed by client). When the client has
> finished, the message is no longer needed (state 4 - termination).
>
> You can argue over the details, but it is clear that a lifecycle
> exists. It is difficult to see how this type of example can be
> formulated to exclude the active message.
>
> Given that we now have 3 active objects (or object clusters), we can
> think a bit more about assigning responsibility.
>
> With a passive message object, it seems wrong to use an event for
> iteration over messages within the client. With an active message,
> however, it becomes natural for the client to issue an event when it
> has finished processing a message. The event would be received by a
> message, causing it to be deleted. At the same time, the message can
> be responsible for causing the client to be sent the next message.
> ...

I've modeled this type of behavior in a missile launch control system
using 2 active objects and an assigner. In that system, a missile on the
launch pad communicates asynchronously with the launch control system
via MILSTD-1553 messages.
The launch controller (at times) has no idea that a message is on the
way from the missile, but must be prepared to process any one of several
that may be received unsolicited. For this model, I used an active
Message object, a passive Message_Update object, and an assigner that
controls access to the "current content" of the Message object.

In this context, I think of the Message object as the "persistent"
Message. It contains the last associated information that was received
from an external source; it is essentially what a Message really is to
the other objects in the domain. Any object that needs to access the
current or next available content of the Message must "register" for it
somehow, e.g. by setting a wait_status attribute to true. The Message
object, by definition, always holds the most recent "version" of some
predefined system message.

A Message_Update object is created whenever an incoming message is
received, i.e. is ready to be processed by Dave's "client". As messages
are received, new Message_Update instances are created. Zero or more
Message_Update objects may exist at any given time, of course, so
Message_Update.update_number gets incremented at instantiation time to
maintain the sequential information required to process each incoming
message in the correct order.

The assigner provides read access to any objects that have registered
for the Message before the Message's content may be rewritten, by the
assigner, with the Message_Update's content. If no objects are waiting
to read the Message, the assigner checks for any pending updates to the
Message. The assigner determines which update number is expected next by
reading Message.last_update_number. If a Message_Update is pending, then
it is allowed to be written to the Message object's content and then
deleted by the assigner.

I think this approach removes the requirement for an explicit modeling
convention for ordered sets, at least within this "client-server"
context. In fact, the concept of client/server really isn't required;
you just need to know about messages, things that want the messages, and
things that want to update the current content of the messages.

The assigner was required in this example because I had to guarantee
that every waiting recipient, during a given system state, had access to
the same update version of a given Message's data. The addition of the
"last_update_number" attribute allowed the assigner to enforce this
policy. The sequencing behavior was completely removed from the Messages
and relegated to the assigner.

-------------------------------------------------
Tim Brockwell, EER Systems
Battlefield Automation Directorate
Redstone Arsenal, Alabama
tel. 205.890.4616 / 205.955.6921
bwell@whnt19.com (home)
tbrockwell@sed.redstone.army.mil (work)
-------------------------------------------------

Subject: Re: (SMU) issues concerning SMALL
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Gregg Kearnan wrote:
> > The idea that events are less efficient than functions is a result
> > of naive architectures,
>
> I'm curious, what is your idea of an architecture that is
> not naive?

I could waffle for quite a while on this subject - I'll try to keep this
short. There are two ways of approaching the problem of producing an
architecture + code generator. You can say:

. Here is OOA - how do I implement an OOA model?

or you can say:

. Here is my application - how should I structure its implementation?
. How do I populate that structure from this OOA model?

The former stance leads to architectures that have state machines and
event queues - because those are the abstractions of OOA. I tend to view
such architectures as "naive" because they neglect the characteristics
of the application. This is often acceptable. Most projects will find a
naive architecture that meets the application's non-functional (NF)
requirements.

I must make it clear that a "naive" architecture is not necessarily
simplistic. It may be extremely clever. I will tend to classify any
architecture that is not driven from the application's NF requirements
as naive.

Most architectures that are driven from the OOA-of-OOA will tend to use
procedure call semantics for synchronous services (inc. architectural
services) and a higher level mechanism for the event queue. A natural
consequence of this is that the events become less efficient than
functions.

In my opinion, too much emphasis is placed on the desire for
architectures and code generators to be reusable. This is a laudable
goal, but it must be recognised that not all software is reusable. Code
use is at least as important as code re-use.

As an example of a non-naive architecture, I recently needed to
construct a VHDL testbench. This was already specified using tables and
use cases, so it wasn't too difficult to derive an OOA model and
populate it. However, before I'd done this analysis (or even thought of
using a code generator), I had already sketched out the testbench
architecture. It was only when it became obvious that writing the VHDL
would be tedious and error prone that we decided to use a code
generator. The fact that we were generating code from a formal model did
result in some changes to the details of the architecture, but the basic
structure was unchanged from the original concept. Needless to say, the
code generator was not designed to work on any model other than that for
which it was designed. It might work for other models: but that would be
pure coincidence.

Dave. Not speaking for Mitel Semiconductor.

--
Dave Whipp, Embedded Systems Group, Mitel Semiconductor,
Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277   mailto:david.whipp@gpsemi.com
fax. +44 (0)1752 693306   http://www.gpsemi.com

Subject: Re: (SMU) issues concerning SMALL
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> Lahman gave an example of the use of ordered sets in a buffered
> client-message-server scenario. His example asserted that a client
> would create an ordered list of messages which a server would service
> asynchronously -- placing the result in the message. Once all the
> messages were processed, the client reads back the results using a
> get-first/get-next mechanism.

If that was the impression I gave, then it needs to be clarified. The
service domain (in my case the entire application) creates the messages.
The client sends a set of setup commands to the application that have an
order that is critically important to the domain. The client then sends
an execute command. Finally the client retrieves the messages created by
the domain in the same order that the commands were issued. [The client
is an Atlas compiler; the commands are essentially Atlas program
statements; and the application is an instrument driver with an Atlas
interface.]
> As I mentioned above, I do not feel that this scenario justifies the
> addition of ordered sets with get-first/get-next operations as a
> primary mechanism within OOA. At most, it would be a service domain.

I believe the get-next operation is fundamental to the application. One cannot do an OOA for the application without being aware of this requirement.

> However, I'd like to pull the example apart a bit. The scenario
> suggests that there are two active objects, which communicate using a
> set of passive message objects. I believe this is incorrect: the
> message itself has a lifecycle.
>
> Some of its states are quite obvious. It is created by the client
> with a command for the server (state 1 - waiting for server). At some
> point in the future, the server has provided a response (state 2 -
> ready for client). A bit later, the client will want to process that
> response (state 3 - being processed by client). When the client has
> finished, the message is no longer needed (state 4 - termination).

There are other objects (e.g., Dynamic Test) in the domain that have life cycles to handle this sort of processing. This designation of active vs. passive is dictated by other considerations in the problem space (e.g., how the instrument driver works). In practice the message shell is created in the bridge service from the client for each setup command. At this point the order of the commands needs to be recorded. The data is filled in by a bridge service to the instrument driver. This does not depend upon the order -- other relationships are used. The message is retrieved by a bridge service from the client. Only at this point is the order important for the messages. It is hard to justify an active object for the message itself in this context.

> Given that we now have 3 active objects (or object clusters), we can
> think a bit more about assigning responsibility.
>
> With a passive message object, it seems wrong to use an event for
> iteration over messages within the client. With an active message,
> however, it becomes natural for the client to issue an event when it
> has finished processing a message. The event would be received by a
> message: causing it to be deleted. At the same time, the message can
> be responsible for causing the client to be sent the next message.

This is where there is a fundamental problem with the assumptions. There is no client object or domain with a life cycle; the client is external to the entire application and we have no control over it. Our application has to react to the protocol that is defined in the requirements. (This was one reason I chose this example -- to control that degree of modeling freedom. The other reason was that it happened to be the most recent thing I did.)

I think we can go around on this for a while but I think it will be fruitless because the real problem domain is much more complex (there are 32 objects and 83 KLOC of code just in the Atlas interface domain before one gets to the instrument driver domains). Justifying why we didn't use your approach would require a far longer e-mail than even I am willing to write. For example, in practice the messages were not ordered directly. They were tied 1:1 to setup surrogates that had to be ordered to load the hardware properly when the execute command arrived. The bottom line is that we decided that ordering some instances was the most natural, clear, and economic means of satisfying the requirements in the OOA.

--
H. S. Lahman, Teradyne/ATB
"There is nothing wrong with me that could not be cured by a capful of Drano"
321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp,

> Most architectures that are driven from the OOA-of-OOA will tend
> to use procedure call semantics for synchronous services (inc.
> architectural services) and a higher level mechanism for the
> event queue. A natural consequence of this is that the events
> become less efficient than functions.

Doesn't the architecture still have the problem of identifying the instance even when using a function call directly for the event? The architecture is going to have to have run-time code to translate the identifiers to get the right instance for each event in the OOA (unless one uses references a la SMALL, which you don't want to do. B-)). It seems to me that the call to translate the identifiers is going to be just as costly as a call to an event manager.

Similarly, the main thing the event manager provides is management of the queue. If you go to direct function calls, the architecture is still going to have to ensure proper sequencing of calls in the asynchronous model. I would think that is most efficiently done using an event queue manager. (For the synchronous architecture the normal practice is to translate events directly to function calls in the architecture, so the only interesting case is the asynchronous one.)

--
H. S. Lahman, Teradyne/ATB
"There is nothing wrong with me that could not be cured by a capful of Drano"
321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp,
> > Most architectures that are driven from the OOA-of-OOA will tend
> > to use procedure call semantics for synchronous services (inc.
> > architectural services) and a higher level mechanism for the
> > event queue. A natural consequence of this is that the events
> > become less efficient than functions.
>
> Doesn't the architecture still have the problem of identifying the
> instance even when using a function call directly for the event?
> The architecture is going to have to have run-time code to
> translate the identifiers to get the right instance for each
> event in the OOA

If instance identity is an issue, then it doesn't matter whether functions or events are used: the overhead is still there, somewhere. If we go back to the example (my interpretation of yours): the semantic requirement to get the next message is the same whether you use a get_next function call or a ready_for_next (+reply) event. In both cases, you need to find the current message, delete it, and return the next message. It is very unlikely that a general purpose code generator would manage to create the required code; but it's quite simple for an application-specific code generator.

> (unless one uses references a la SMALL, which you don't want
> to do. B-)).

#include "std_response.h"

I have nothing against references in an implementation. I just don't want them in the model.

> Similarly, the main thing the event manager provides is management
> of the queue.

If you don't have a queue, then you don't need a manager.
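To make the two mappings under debate concrete: in an asynchronous architecture the generated event generator pushes work onto a queue that a dispatch loop drains in order, while a synchronous architecture can translate the same event into a direct call - no queue, so no manager. A rough C++ sketch of both (all names hypothetical, not taken from any commercial architecture):

    #include <deque>
    #include <functional>
    #include <utility>

    struct Dog {
        // Hypothetical state action entered on receipt of the event.
        void shut_up(int /*epithet_id*/) { /* action body omitted */ }
    };

    // Asynchronous mapping: the generator enqueues a closure and a
    // dispatch loop preserves event ordering by draining the queue.
    std::deque<std::function<void()>> event_queue;

    void gen_shut_up_async(Dog& d, int epithet_id) {
        event_queue.push_back([&d, epithet_id] { d.shut_up(epithet_id); });
    }

    void dispatch_all() {
        while (!event_queue.empty()) {
            auto action = std::move(event_queue.front());
            event_queue.pop_front();
            action();
        }
    }

    // Synchronous mapping: the "event" is just a direct call with the
    // same supplemental data; the queue and its manager disappear.
    void gen_shut_up_sync(Dog& d, int epithet_id) { d.shut_up(epithet_id); }

The sketch assumes the destination instance is already in hand; the cost of finding it from identifiers is the separate issue discussed next.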
It is interesting to compare the operation of a "queue manager" with that of a "stack manager". The latter is generally more efficient because of the hardware support typically provided for function call semantics (many of the big performance gains for RISC were a result of this support). Because of this built-in efficiency advantage, it is generally a good idea to minimise the use of a queue in an implementation. This implementation consideration should not, however, be allowed to bias the model.

> If you go to direct function calls, the architecture is still
> going to have to ensure proper sequencing of calls in the asynchronous
> model. I would think that is most efficiently done using an event queue
> manager. (For the synchronous architecture the normal practice is to
> translate events directly to function calls in the architecture, so the
> only interesting case is the asynchronous one.)

I think that, perhaps, I was slightly unclear in my original statement. If a model requires asynchronous semantics then there is no possibility of avoiding it. Therefore there is nothing against which to perform performance comparisons. The interesting cases lie in the fuzzy boundary between the obviously synchronous and the necessarily asynchronous.

My gripe was against the more general notion that, for a given application, a model that makes extensive use of events is somehow less efficient than one that avoids them. Hence, if you have an ordered set of messages, then iteration using events is considered to be less efficient than iteration using an iterator mechanism such as getFirst/getNext.

This is an example of the fairly subtle implementation bias that tends to influence models. There is no built-in performance advantage for the low-level iterator mechanism because events can be mapped onto that mechanism. Events can be mapped onto things other than function calls. Mapping an event onto a function return can be very powerful. An occasional GOTO can be useful, too.

Dave. Not speaking for Mitel Semiconductor.
-- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com

Subject: Re: (SMU) issues concerning SMALL

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > > A natural consequence of this is that the events
> > > become less efficient than functions.

> If instance identity is an issue, then it doesn't matter whether
> functions or events are used: the overhead is still there, somewhere.
>
> If we go back to the example (my interpretation of yours): the
> semantic requirement to get the next message is the same whether
> you use a get_next function call or a ready_for_next (+reply) event.
> In both cases, you need to find the current message, delete it, and
> return the next message.
>
> It is very unlikely that a general purpose code generator would
> manage to create the required code; but it's quite simple for an
> application-specific code generator.

I thought you were talking about the processing for a single event -- whether the architecture implemented the event generator process with a queue manager call or directly invoked an action function call.

> > (unless one uses references a la SMALL, which you don't want
> > to do. B-)).
>
> #include "std_response.h"
>
> I have nothing against references in an implementation. I just
> don't want them in the model.
I realize that. I was just engaging in petard hoisting since a corollary is that if references aren't in the OOA, then the architecture has that overhead on _every_ event generator call.

> I think that, perhaps, I was slightly unclear in my original
> statement. If a model requires asynchronous semantics then there is
> no possibility of avoiding it. Therefore there is nothing against
> which to perform performance comparisons. The interesting cases lie
> in the fuzzy boundary between the obviously synchronous and the
> necessarily asynchronous.

I had dismissed the synchronous models as being uninteresting because they are typically implemented with direct action calls anyway. I am not sure what you mean by a fuzzy boundary. We usually make a decision at the beginning of the translation about whether a given domain needs to be synchronous or not (assuming we are manually generating). This is typically determined by how bridges are handled. We don't do it on an event-by-event basis because it is too easy to screw up.

> My gripe was against the more general notion that, for a given
> application, a model that makes extensive use of events is somehow
> less efficient than one that avoids them.
>
> Hence, if you have an ordered set of messages, then iteration using
> events is considered to be less efficient than iteration using
> an iterator mechanism such as getFirst/getNext.
>
> This is an example of the fairly subtle implementation bias that
> tends to influence models. There is no built-in performance
> advantage for the low-level iterator mechanism because events
> can be mapped onto that mechanism. Events can be mapped onto
> things other than function calls. Mapping an event onto a function
> return can be very powerful. An occasional GOTO can be useful, too.

Now I am really confused. These paragraphs seem to be saying that you do NOT feel that there is any inherent inefficiency in using events while the original quote (at the top of this message), to which I was responding, seems to say you do. Fortunately it is late afternoon here and I can go have a Frostie to soothe my aching mind.

--
H. S. Lahman, Teradyne/ATB
"There is nothing wrong with me that could not be cured by a capful of Drano"
321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> I thought you were talking about the processing for a single event --
> whether the architecture implemented the event generator process with
> a queue manager call or directly invoked an action function call.

Two points: firstly, optimisations can cover multiple events; unfortunately, the explosion of possibilities makes multi-event pattern optimisations difficult in a general purpose architecture. Secondly, you only list two ways of mapping an event. There are others (see my previous email).

> > > (unless one uses references a la SMALL, which you don't want
> > > to do. B-)).
> >
> > #include "std_response.h"
> >
> > I have nothing against references in an implementation. I just
> > don't want them in the model.
>
> I realize that. I was just engaging in petard hoisting since a
> corollary is that if references aren't in the OOA, then the
> architecture has that overhead on _every_ event generator call.

I'll byte on this one - just a little. That corollary does not follow.
There are many architectures around that map OOA identifiers onto pointers. It's a simple optimisation that even a simple translation engine can manage. Therefore the overhead need not exist on _any_ event generator call.

> I had dismissed the synchronous models as being uninteresting because
> they are typically implemented with direct action calls anyway. I
> am not sure what you mean by a fuzzy boundary. We usually make a
> decision at the beginning of the translation about whether a given
> domain needs to be synchronous or not (assuming we are manually
> generating). This is typically determined by how bridges are
> handled. We don't do it on an event-by-event basis because it is
> too easy to screw up.

To get efficient code from a model, it may be necessary to concentrate on a few specific aspects of the model and optimise their implementation. Global mappings are inherently non-optimum.

It's counterproductive to optimise everything (takes too long). It is often best to start with a simple, naive, architecture and then to profile the results. If the performance is adequate, then there is no need to optimise. If it's not, then you need to concentrate on the trouble spots. This performance-directed optimisation tends to mitigate the effects of "it's too easy to screw up" because you can focus on one specific element of the model.

Automation of code generation is very useful in this type of development. If you make a mistake, you can modify the generator and re-run it without rolling back to an earlier version of the generated code. It allows aggressive optimisation techniques that would be unthinkable for manual code generation. It also allows you to apply an optimisation to the whole model - who knows, it might help somewhere else (or it might break something else: in which case you need to refine the optimisation template!)

> > > > A natural consequence of this is that the events
> > > > become less efficient than functions.
>
> Now I am really confused. These paragraphs seem to be saying that
> you do NOT feel that there is any inherent inefficiency in using
> events while the original quote ([above]), to which I was responding,
> seems to say you do.

The key word in the original quote was "become". There is no inherent inefficiency in events; but they become inefficient in many general purpose architectures.

My message is that, in principle, many events can be implemented in a more efficient way than function calls. Events that _need_ to be queued are a special case (though they can still be optimised).

Dave. Not speaking for Mitel Semiconductor.
-- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com

'archive.9803' --

Subject: Re: (SMU) issues concerning SMALL

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I'll byte on this one - just a little. That corollary does not
> follow. There are many architectures around that map OOA identifiers
> onto pointers. It's a simple optimisation that even a simple
> translation engine can manage. Therefore the overhead need not exist
> on _any_ event generator call.

Work has to be done to map the OOA identifiers into pointers. It can't be done at translation time (w/o colorization) because the identifiers may be dynamically assigned. The event generator process is being passed data by value, so it has no context.
Therefore the mapping has to occur in the event generator function code at run time on each call.

I can imagine a translator that would map the event generator process into two implementation functions, one taking identifiers and one taking references, and then doing some fancy analysis on the action processing to determine whether it would be more efficient to map the identifiers into references once and pass references to multiple event generator invocations or let the event generator function do the mapping. But that is not exactly a simple translator. If it were, one of the commercial translators would be doing it.

> To get efficient code from a model, it may be necessary to
> concentrate on a few specific aspects of the model and optimise
> their implementation. Global mappings are inherently non-optimum.
>
> It's counterproductive to optimise everything (takes too long). It
> is often best to start with a simple, naive, architecture and then
> to profile the results. If the performance is adequate, then
> there is no need to optimise. If it's not, then you need to
> concentrate on the trouble spots. This performance-directed
> optimisation tends to mitigate the effects of "it's too
> easy to screw up" because you can focus on one specific element
> of the model.
>
> Automation of code generation is very useful in this type of
> development. If you make a mistake, you can modify the generator
> and re-run it without rolling back to an earlier version of the
> generated code. It allows aggressive optimisation techniques that
> would be unthinkable for manual code generation. It also
> allows you to apply an optimisation to the whole model - who knows,
> it might help somewhere else (or it might break something else:
> in which case you need to refine the optimisation template!)

I think we are talking about two different things here. In general we use synchronous domain architectures whenever we can because they are simple, easy to debug, and more efficient because the queue management is unnecessary. However, once we recognize that we have asynchronous processing _anywhere_ in the domain, we go to a fully asynchronous architecture.

I agree that one could get better performance optimization by playing games on an event-by-event basis. However, there is a potential penalty in that obscure errors can creep into the implementation that may not be found in a simulation suite. Since there is no formalism for mixing synchronous and asynchronous, we regard this as high risk. So we would not attempt to do this unless there was no other recourse to resolve a performance problem.

> > > > > A natural consequence of this is that the events
> > > > > become less efficient than functions.
> >
> > Now I am really confused. These paragraphs seem to be saying that
> > you do NOT feel that there is any inherent inefficiency in using
> > events while the original quote ([above]), to which I was responding,
> > seems to say you do.
>
> The key word in the original quote was "become". There is no inherent
> inefficiency in events; but they become inefficient in many general
> purpose architectures.
>
> My message is that, in principle, many events can be implemented in
> a more efficient way than function calls. Events that _need_ to be
> queued are a special case (though they can still be optimised).

OK. As indicated above, though, I worry about mixing the two approaches. It seems to me that this requires an even more complex analysis than straight asynchronous modeling.
Even the seemingly obvious synchronous cases, such as self-directed events, have pitfalls -- as Yeager and others pointed out in another thread with several examples.

--
H. S. Lahman, Teradyne/ATB
"There is nothing wrong with me that could not be cured by a capful of Drano"
321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) high-level SMALL assessment

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi all -

In early December, Steve Mellor invited the members of this forum to review the current draft for SMALL, and comment in this forum. Since then, there has been a steady current of discussion on various detailed SMALL topics. Here at Pathfinder, we wanted to do a thorough review of this draft and try doing some SMALL for real actions from our own and from client models. We have a fairly unique perspective since we actually use ADFDs for process models with all of our own product development and with some of our clients. Our goal was to determine how well SMALL compared to ADFDs and to procedural action languages (PALs). After writing several state actions from 3 different existing "real world" applications, including one ADFD-based system and one using a popular PAL dialect, we have arrived at the following opinions:

Some of the ways SMALL is better than ADFDs:
- given the current editor technology, SMALL is faster to enter once you memorize the syntax
- SMALL supports "else" logic better
- the SMALL reference concept is cleaner than the ADFD technique of flowing compositions of identifying attributes
- SMALL's relationship navigation is clean and concise
- an ADFD for a very complicated action can be quite hard to lay out and maintain - SMALL would be easier to manage in this context

Some of the ways ADFDs are better than SMALL:
- the data flow within an action is quite easy to follow in an ADFD, but it is very difficult to discern from a SMALL segment
- memorizing ADFD bubble naming and flow label syntax requires some work, but it is much easier to pick up than the syntax of SMALL
- the 2-d world of an ADFD affords more layout/formatting opportunities to enhance readability than a 1-d textual form of expression

Some of the ways SMALL is better than PALs:
- the data-flow-oriented nature of SMALL, and its concise palette of primitives, keeps the process modeler in an analysis perspective, helping to avoid "implementation leakage" into the process models
- SMALL's detachment from implementation may afford the architecture more flexibility in how the actions are translated

Some of the ways PALs are better than SMALL:
- the common form of expression shared by PALs is nearly universal in this industry for conveying programming concepts, making it easier to learn for most programmer/analysts
- PALs offer more flexibility and capabilities in managing flow of control, with if, while, switch, and for constructs
- simplistic or direct translation of PALs to implementations using common programming languages is more direct than SMALL

We look forward to the continued development of SMALL. Thank you for this opportunity to participate.

_______________________________________________________
Pathfinder Solutions Inc.
www.pathfindersol.com | 888-OOA-PATH |
| effective solutions for software engineering challenges |
| Peter Fontana voice: +01 508-384-1392 |
| peterf@pathfindersol.com fax: +01 508-384-7906 |
_______________________________________________________|

Subject: Re: (SMU) issues concerning SMALL

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp...
> Work has to be done to map the OOA identifiers into pointers. It
> can't be done at translation time (w/o colorization) because the
> identifiers may be dynamically assigned.

This does not matter. It is easy (given a suitable translation engine) to determine the source process of a dataflow. If it is an accessor that is reading a referential attribute, or an identifier; and if relationships are implemented as links; then the link pointer (or instance pointer) can be used for the event destination. (There are obvious complications for compound identifiers - but there are also obvious solutions to these problems.)

This can all be done at translation time (obviously, the link pointer is read at run-time; but the fact that it's going to be read is determined at translation time).

> But that is not exactly a simple translator. If it were, one of
> the commercial translators would be doing it.

The reason why commercial translators don't do it is simple. In the beginning, the problems of code generation were fundamental. The basic concepts needed to be sorted out. In this environment, most optimisations were discarded. Later, once the technology had matured, the most popular SM case tools had moved to reference-based semantics for modelling relationships. In this environment, the optimisation is not needed.

There are still some code generators that are based on referential attributes. I don't know why these don't use such optimisations. Maybe it's because event generation isn't a critical performance limitation in most models (because modellers avoid events because they "know" they are inefficient).

> > My message is that, in principle, many events can be implemented
> > more efficiently than function calls. Events that _need_ to be
> > queued are a special case (though they can still be optimised).
>
> OK. As indicated above, though, I worry about mixing the two
> approaches. It seems to me that this requires an even more complex
> analysis than straight asynchronous modeling. Even the seemingly
> obvious synchronous cases, such as self-directed events, have
> pitfalls -- as Yeager and others pointed out in another thread
> with several examples.

No one ever said that modelling architectures was simple :-). It is, of course, much easier to do an application-specific architecture than a general purpose one. An application-specific architecture can ignore any issue that doesn't arise in the application. Also, it only needs to optimise the critical paths.

Dave. Not speaking for Mitel Semiconductor.
-- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david.whipp@gpsemi.com fax. +44 (0)1752 693306 http://www.gpsemi.com

Subject: (SMU) Re: Identifier flow "adornment"

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:17 AM 3/3/98 +0000, shlaer-mellor-users@projtech.com wrote:
>Dave Whipp writes to shlaer-mellor-users:
>--------------------------------------------------------------------
> ...
>This can all be done at translation time (obviously, the link
>pointer is read at run-time; but the fact that it's going to be
>read is determined at translation time).
>
>> But that is not exactly a simple translator. If it were, one of
>> the commercial translators would be doing it.
>
>The reason why commercial translators don't do it is simple. In the
>beginning, the problems of code generation were fundamental. The
>basic concepts needed to be sorted out. In this environment, most
>optimisations were discarded.
> ...
>There are still some code generators that are based on referential
>attributes. I don't know why these don't use such optimisations.
>Maybe it's because event generation isn't a critical performance
>limitation in most models (because modellers avoid events because
>they "know" they are inefficient).

OK - enough of this global handwaving about the "state of the art"! The charter for this forum prevents me from making specific product references, but this joint assumption you've arrived at is FALSE. There is at least one family of architectures commercially available that examines a flow with a set of identifying or formalizing attributes and injects a pointer reference as appropriate - at translation time. Years ago the PT architecture consultants called this technique "adornment".

(Thank you - I feel much better now...)

_______________________________________________________
Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH |
| effective solutions for software engineering challenges |
| Peter Fontana voice: +01 508-384-1392 |
| peterf@pathfindersol.com fax: +01 508-384-7906 |
_______________________________________________________|

Subject: Re: (SMU) high-level SMALL assessment

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana...

Nice analysis. Pithy, Pertinent, and Practical.

--
H. S. Lahman, Teradyne/ATB
"There is nothing wrong with me that could not be cured by a capful of Drano"
321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) issues concerning SMALL

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> This does not matter. It is easy (given a suitable translation engine)
> to determine the source process of a dataflow. If it is an accessor
> that is reading a referential attribute, or an identifier; and if
> relationships are implemented as links; then the link pointer (or
> instance pointer) can be used for the event destination. (There are
> obvious complications for compound identifiers - but there are also
> obvious solutions to these problems.)
>
> This can all be done at translation time (obviously, the link
> pointer is read at run-time; but the fact that it's going to be
> read is determined at translation time).

I am not saying that it cannot be done at translation time for those cases that are pure relationship navigation. The models are unambiguous and issues like dynamic polymorphism are not relevant in this situation, so there _has_ to be a way. [Give me a big enough computer and enough time and I will model the universe at full scale and in real time.] However, I just don't see that as a _simple_ process. Beyond your description, which already involves a fairly sophisticated algorithm and architectural data structures, other support is needed, such as doubly linked relationships.
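To picture the "adornment" and link-pointer scheme being debated here, consider a rough C++ sketch of the difference it makes to generated code (all names are hypothetical and taken from none of the commercial architectures alluded to above). The naive translation looks the destination instance up by identifier at run time; the adorned translation notices that the identifier flowed out of a relationship navigation and substitutes the stored link pointer:

    #include <string>
    #include <unordered_map>

    struct Dog {
        std::string dog_id;
        // Hypothetical event intake for the destination state machine.
        void event_d7_shut_up() { /* state action entry omitted */ }
    };

    struct DogOwner {
        std::string owner_id;
        Dog* r1_owns = nullptr;  // relationship R1 stored as a link pointer
    };

    // Naive translation: the event generator receives identifier values,
    // so every generation pays for a run-time instance-table lookup.
    std::unordered_map<std::string, Dog*> dog_table;

    void gen_d7_by_identifier(const std::string& dog_id) {
        auto it = dog_table.find(dog_id);
        if (it != dog_table.end()) it->second->event_d7_shut_up();
    }

    // "Adorned" translation: the identifier came from an R1 traversal,
    // so the translator injects the link pointer and the lookup vanishes.
    void gen_d7_adorned(const DogOwner& owner) {
        if (owner.r1_owns) owner.r1_owns->event_d7_shut_up();
    }

The decision to call the second form instead of the first is made entirely at translation time; only the pointer read happens at run time, which is the point both sides agree on.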
I also don't see obvious solutions for compound identifiers when each one is extracted from a different object (though I do not regard the problem as insoluble). Finally, it doesn't work for Finds or for extracting subsets from 1:Ms or M:Ms; some other mechanism would be needed for these.

I agree that all these problems can be dealt with in the translation -- but to do so would require a seriously sophisticated system. In addition, there is a chicken-and-egg problem in that the translation also has to decide how to implement the relationships. In a truly optimized system (lacking colorization), this could not be done until all the actions had already been examined and at least partially translated. So you are into multi-pass, global optimizations.

--
H. S. Lahman, Teradyne/ATB
"There is nothing wrong with me that could not be cured by a capful of Drano"
321 Harrison Av. L51, Boston, MA 02118-2238
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) high-level SMALL assessment

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Peter,

Thanks for a very nice analysis. I agree with HS: Pithy, Practical and Pertinent. (I like that phrase, I think I'll stea^H^H^H^H borrow it.) We are very interested to see if others in the group have other comments.

From our point of view, SMALL and ADFDs are intended to show the same information. This may require some modification to ADFDs to make them absolutely congruent. We call the modified ADFDs g-SMALL (graphical-SMALL).

As for the comparison to PALs, we all have a problem. Assuming we use SMALL instead of existing action languages, (tautologically) we'll end up having to 'program' actions using the language, and we want as good a language as possible. So we'd better deal with the issues raised by the comparison.

At 03:45 PM 3/2/98 -0500, you wrote:
>Some of the ways PALs are better than SMALL:
>- the common form of expression shared by PALs is nearly universal in this
>industry for conveying programming concepts, making it easier to learn for
>most programmer/analysts

Absolutely true. Of course, if we used PALs' common form of expression, then SMALL would be one of them thar PALs, and not SMALL any longer. There are different components to the language that we may be able to make more congruent. This is a starter list for discussion:

* Data Access: SMALL' can use the Object Constraint Language (OCL) syntax for this, which is closer to many PALs, especially data access ones such as SQL. Example: Select( d: Dog | Name = "Sarzak" ).Weight is OCL for Dog( all, Name = "Sarzak" ).Weight, and you can use the variable 'd' in internal expressions.

* Flows: This is the heart of the language, and we intentionally made this as close as possible to UNIX, another 'data flow' language. Note that certain string processing concepts from UNIX/Awk/Perl etc could be useful in building archetypes.

* Process Definition: Undefined in SMALL. Can easily be done using a PAL, though I'd hope to be able to run formal proofs against a pre-/post-condition expression of each process.

* ....

>- PALs offer more flexibility and capabilities in managing flow of control,
>with if, while, switch, and for constructs

Flexibility cuts both ways. As you pointed out above, SMALL has a limited palette of constructs, and that's a Good Thing. IMO, SMALL has a switch construct of equal power and elegance to a classical PAL.
Of course, the testing of the switch value is done in a process, but that's the point. All that we require here, though, is to define a standard 'switch' test process. To make clear what I mean, let me remind you of a comment from HS regarding arithmetic. Not unreasonably, he finds it suboptimal to have to define a process to add two numbers:

1, ~x | add >~x; // have to define add (!)

Clearly SMALL should define some simple arithmetic functions to avoid this. Similarly,

~x | switch? !a, !b, !c; // we need to define 'switch' as a standard
!a: ...
!b: ...

As for iteration (for, while, etc), much of this is handled by collections. What else do we need? What is missing?

As for 'if' style constructs, SMALL requires use of the test? style, and it deliberately prohibits embedding one test inside another. Again, a goal of SMALL is to separate out the processing from the *when* it is executed. Is there another more elegant way to achieve this goal?

>- simplistic or direct translation of PALs to implementations using common
>programming languages is more direct than SMALL

IMO, this is actually an advantage because it says that the architecture is taking this responsibility away from the analyst. Of course, I see your point ...

>We look forward to the continued development of SMALL. Thank you for this
>opportunity to participate.

SMALL is only going to be useful if (1) there are implementations, and (2) we can actually use the language to do analysis. I have been writing all my examples lately using SMALL and I find it forces me to think about the right issues. But then that's just one person's opinion.

So, bottom line, one question under two scenarios.

Scenario 1: There are tools that allow you to write SMALL and that allow you to switch back and forth between SMALL and a g-SMALL rendition of the same action. Question: Does SMALL do the job?

Scenario 2: There are tools that allow you to write SMALL but there are NO tools to allow you to see a g-SMALL rendition of the same action. Question: Does SMALL, alone, do the job?

-- steve mellor

Subject: RE: (SMU) Re: Identifier flow "adornment"

"Vock, Michael" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> OK - enough of this global handwaving about the "state of the art"! The
> charter for this forum prevents me from making specific product references,
> but this joint assumption you've arrived at is FALSE. There is at least one
> family of architectures commercially available that examines a flow with a
> set of identifying or formalizing attributes and injects a pointer reference
> as appropriate - at translation time. Years ago the PT architecture
> consultants called this technique "adornment".
>
> (Thank you - I feel much better now...)

Hey, Peter, I'm with you. This "pointer" thing was solved long ago. By the way, I know of one proprietary architecture that does this also ;)

Mike Vock
SRA International

Subject: Re: (SMU) high-level SMALL assessment

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 07:44 AM 3/4/98 -0800, shlaer-mellor-users@projtech.com wrote:
>Steve Mellor writes to shlaer-mellor-users:
>--------------------------------------------------------------------
> ...
>So, bottom line, one question under two scenarios.
>
>Scenario 1: There are tools that allow you to write SMALL and that
>allow you to switch back and forth between SMALL and a g-SMALL
>rendition of the same action.
>Question: Does SMALL do the job?

This approach lets you enter SMALL textually, and then take advantage of the graphical layout for presentation. The simple answer is "Yes - SMALL 'does the job'", but the real answer can involve technologies like auto-layout, etc - not a simple feat for a complicated diagram.

>Scenario 2: There are tools that allow you to write SMALL but there
>are NO tools to allow you to see a g-SMALL rendition of the same action.
>Question: Does SMALL, alone, do the job?

You spend 10% of your time entering models into a tool, and then the remaining 90% of your time is involved in getting that information back out - basically looking at it in one form or another. I'm going to raise the bar here and say the presentation aspects of SMALL must be improved somewhat before it earns a 'yes' on this one.

OK - enough from us at Pathfinder. What do the rest of you ESMUGers think?

_______________________________________________________
Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH |
| effective solutions for software engineering challenges |
| Peter Fontana voice: +01 508-384-1392 |
| peterf@pathfindersol.com fax: +01 508-384-7906 |
_______________________________________________________|

Subject: Re: (SMU) high-level SMALL assessment

"Lynch, Chris D. SD" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana:

>peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
>
>You spend 10% of your time entering models into a tool, and then the
>remaining 90% of your time is involved in getting that information back out
>- basically looking at it in one form or another. I'm going to raise the
>bar here and say the presentation aspects of SMALL must be improved somewhat
>before it earns a 'yes' on this one.
>OK - enough from us at Pathfinder. What do the rest of you ESMUGers think?
                                                ^^^^^^^^^^^^^^^^^^^^^^^

I agree completely, and am quite interested to see how the verbose form of the action language addresses SMALL's, uh, dense nature. On general principles, I would like to see successful specification and programming languages emulated where possible, for the sake of non-computer scientists who may read my action-routines. ("Successful" in this context I define as having inherent readability and writeability, and some tolerance of typographical errors.)

-Chris Lynch
Abbott AIS, San Diego, CA

Subject: (SMU) Events and Anti-Events

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

In the Real World, sub-atomic particles called electrons and their anti-matter equivalents, the positrons, are continually being created and destroyed. It can happen that a high energy photon will produce a pair of these particles. Sometime later the same two particles can meet and annihilate each other, giving off another photon. The interesting thing here is that a positron (which of course moves forward in time like we all do) is actually an electron travelling backwards in time.

It turns out that events can travel backwards in time too! For every event (e) there exists an anti-event (p). The effect of an anti-event (p) is defined to be the opposite of the effect of the original event (e). In other words, an anti-event (p) will reverse the effect of an event (e), leaving the system in the same state as it was before the event (e) occurred.
For example, in some domain there is a window object and a line object; a line appears in the window on the user's screen. The line object may receive the following events:

LNE01: Create line (; x1, y1, x2, y2)
LNE02: Move line (line id; x1, y1, x2, y2)
LNE03: Delete line (line id;)

It can be seen that the event LNE01 has LNE03 as its anti-event. Similarly, the anti-event for Delete line is the event Create line. The event Move line is its own anti-event. The point is, the anti-event Delete line (p) can be thought of as the event Create line (e) moving backwards in time to cancel itself out.

There are some practical applications of anti-events: implementing an undo/redo facility, or perhaps as part of some error recovery mechanism where the system has to be rolled back to a known state, or any other situation where you want to reverse time!

Mike
-- Mike Finn, Dark Matter Systems Ltd | Email: smf@cix.co.uk | Voice: +44 (0) 1483 755145

Subject: (SMU) SMALL presentation issues

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 01:45 PM 3/5/98 -0500, peterf@pathfindersol.com (Peter J. Fontana) wrote:
>You spend 10% of your time entering models into a tool, and then the
>remaining 90% of your time is involved in getting that information back out
>- basically looking at it in one form or another. I'm going to raise the
>bar here and say the presentation aspects of SMALL must be improved somewhat
>before it earns a 'yes' on this one.
>
>OK - enough from us at Pathfinder. What do the rest of you ESMUGers think?

So far, the primary issue that has been raised regarding presentation in SMALL, as I recall anyway, is the terseness of the language. Here's a list of operators in SMALL (not guaranteed to be complete or even exactly correct):

,    composition
>    write
!    sequence (both to set an order and to guard statements)
?    test process indicator
>>   create
-->  migrate
.    data access (and relationship naming)
->   traversal (with [ ] to group the relationships)
one  selection criterion
all  selection criterion
|    flow
=>   value assignment
~    local variable indicator
/    ascending (in a selection)
\    descending (in a selection)
/\   conjunction
\/   disjunction
()   instance set grouping
[ ]  statement grouping (aka block)
gen  event generation
blah blah (other keywords for shuffling flows etc)
+, - etc for arithmetic in selections

Of these, '->', '.', 'one' and 'all' all fall under data access/selection, which needs to be rethought in the context of OCL. >>, -->, /, and \, could be replaced by Create, Migrate, Ascending and Descending respectively. /\, \/, +, - etc are all more than intuitive, though some people may prefer && and || for conjunction/disjunction (and that's fine). 'gen' and the various flow shufflers can be made more verbose. (UML uses 'signal' for the equivalent of 'gen', I think. Fine.)

I am personally attached to ',', '|' and '>' since these reflect the basic dataflow aspect of the language, and IMO, do so neatly. This is also, perhaps, the most contentious issue in the language since people are so used to 'b := x + y;' Perhaps we could try:

b < add | x, y;

or even

b := add | x, y;

instead of

x, y | add > b;

though now, of course, the left-to-right flow seems so natural (!)

We could use flow typing (as per Dave Whipp's suggestions) to remove the Perl-like local variable indicator ~, and, quite possibly, some of the need for value assignment, because we could mix instance references and local variables on the same flow.
For example, instead of flowing instance references to an event generator and passing data elements using value assignment, i.e.

mydog | Gen D7: ShutUp( "[expletive deleted]" => epithet );

we could say instead

mydog, "[expletive deleted]" | Gen D7: ShutUp;

The instance grouping '( )' could be replaced by a naming of the set. For example,

Crate( ... ) ... > crateset;
crateset.X | ...

as has already been discussed. However, we also need a way to deal with

Crate( ... ).Y | Test? !true, !false;
!true: Crate( ) ... ;

Perhaps something like:

Crate( ... ).Y | Test? !true over TrueCrates, !false over FalseCrates;
!true: TrueCrates.X | ... ;

Can't say it appeals much...but it doesn't cause a gag reflex (IMO).

This leaves '?' and '!' (from McCarthy's ' ? : ' that later found its way into C and C++, and Dijkstra's guard concepts.) This set of ideas is also a 'feature' of the language--the notion that the testing and control flow is OUTSIDE the processing and computation. I am not as enamored of this syntax as I am with '|' and '>', but the idea is crucial to being able to move forward in translation: it is the separation of the 'wiring' from the computation that allows a smart architecture to reorganize the computations.

-----------------------------------------------------------------------

As you can see, there is some scope for improving the accessibility of the syntax by replacing symbols with text operators. However, this seems to me to be at something of a low level. The broader issues are these:

* Flows: The use of '|' '>' and left-to-right assignment
* Separated Wiring: The separation of sequencing from the computations
* Scope of Iteration: as defined in OOA96, the Crate example

IMO, a good language for translation needs Flows and Separated Wiring. The Flow syntax is borrowed from UNIX and is relatively accessible (IMO). The test and guard syntax doesn't seem as neat, but .... any suggestions? The scope of iteration problem is more conceptual than presentation, but we do need to find an accessible way to talk about it.

Are there other presentation issues that we should discuss?

-- steve

Subject: (SMU) Contexts and Naming

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Eric V. Smith and Mike Vock (with help from HS) raised an issue with contexts and naming of instance sets. The most extensive message is copied below for context. (Sorry for the length, but it's been a month since the thread started.)

The problem is this: How do you refer to an 'intermediate' instance set in SMALL? For example, in the context of a Dog:

Self -> [R1.IsOwnedBy->R2.LivesIn]City.X ....  // establish City as a set
// now I want to refer to the 'intermediate' set of Dog Owner instances
// for some reason

This problem is especially vexing when reflexive relationships are involved. For example, if we want to compute the weights for all Dogs that are friends of the Dogs owned by an owner. When you say Dog( ), do you mean the instance set defined by traversing from an owner to his/her dogs, or do you mean the set of Dogs that are friends of the owned Dogs?

SMALL, as defined, does not have the concept of an 'intermediate' set. That is, in the first example, only City( ) has been established as a set. And in the second, only the 'latest' set of Dogs is established, i.e. the set of Dogs that are friends of the owner's dogs. No problem. (I wish) Of course, in practice, there is a need to be able to refer to these intermediate sets.
In the first example, it is clumsy (syntactically -- though I can make a powerful argument based on translation that the semantics are just fine) to have to say:

Self -> [R1.IsOwnedBy->R2.LivesIn]City.X ....  // establish City as a set
Self -> [R1.IsOwnedBy]DogOwner.Y ....          // establish Owner as a set

But it is disastrous in the second example (from a DogOwner action):

Self -> [R1.owns->R3.IsFriendOf]Dog.A ...  // establish Dog( )
Self -> [R1.owns]Dog.B ...                 // establish Dog( ) _again_

Clearly, we now have a problem, especially since the description of the language uses weasel words like "the latest set to be established". Lexically? In execution? What!? Argh!

Three options have been proposed:

(1) Name the instance reference set (i.e. make one traversal and store the set of instances found in a reference variable) then use that name for the next traversal

(2) Refer to the 'context' implicitly. Specifically, the syntax suggested is the Unix directory traversal syntax, i.e. '..' So saying Dog( .. ) means the _previous_ instance set, and City( .. ) in the first example means the set of DogOwners. A variation is '...' or '../..' to mean the previous-previous one

(3) Name the context using a ': label' syntax. For example,
Self -> [R1.owns:OwnedDogs->R3.IsFriendOf:DogFriends]Dog.A ...
Now the user can use OwnedDogs and DogFriends as reference variables.

You will also recall a message earlier today about making SMALL more presentable that proposed the following:

Crate( ).Attr | Test? !Big over BigCrates, !Small over SmallCrates;
!Big: BigCrates( ).X ... etc

Comments???? Please don't worry too much about the precise syntax -- it's the ideas that count.

-- steve

At 09:03 PM 2/19/98 -0500, "Eric V. Smith" wrote:
>--------------------------------------------------------------------
>
>While I like the concept here, I think the syntax could be improved by
>using names or labels and not by relative positions. Instead of using
>the ".." syntax to move up to previous saved contexts, I'd suggest
>naming them. Then you could have something like (using your example):
>
>SomeObject:a->[.reflexive_relationship]SomeObject:b->[.reflexive_relationship]SomeObject:c
>{
>  SomeObject:c().attribute < 0; // set the attribute for the "current" instance of SomeObject
>  SomeObject:b().attribute < 1; // set the attribute for the previous instance of SomeObject
>  SomeObject:a().attribute < 2; // set the attribute for the previous-previous instance of SomeObject
>}
>
>This is using the hypothetical syntax ":label" to name a context. The
>primary advantages are clarity and protection from silent changes. If
>your first example (with one reflexive relationship) were changed to
>the second one (with 2 such relationships), then the ".." notation
>might refer to the wrong context if the newly added object were
>inserted between the existing ones. With names (instead of relative
>positions), you are protected against this.
>
>Eric.
>
>-----Original Message-----
>From: MiVock@aol.com [SMTP:MiVock@aol.com]
>Sent: Wednesday, February 18, 1998 3:53 PM
>To: shlaer-mellor-users@projtech.com
>Subject: Re: (SMU) issues concerning SMALL
>
>MiVock@aol.com writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>In a message dated 98-02-18 11:41:55 EST, you write:
>
><< >> Second, I have an issue with "Repeated Access" on page 22.
>> As it says in the first paragraph "for the rest of the action or synchronous
>> service (or until a different set of references to the same object is
>> established), the name of the object followed by an empty set of
>> parenthesis refers to that set of references". I can see situations
>> where I need multiple sets of references to the same object, in
>> particular where I have reflexive relationships. Would it be possible
>> or desirable to provide an optional name to a set of references, so
>> that multiple references could be re-used?
>
>I agree there is potentially an ambiguity here. However, one can
>eliminate the ambiguity by assigning the set to a reference variable. I
>believe this would be the preferred mechanism if one needs the set in
>different statements -- it would avoid subsequent miscues during
>maintenance. Some might regard the notation as described to be handy
>within a single statement, as in the example.

> HS is again right that you can assign the reference collection to a
> reference variable. I have to tell you tho' that it makes me nervous
> because all of these intermediate variables make it harder for the
> translation engine.

>Here's a wacked out, insane idea that I'm sure someone else has had on
>this ambiguity issue. What if the following syntax were supported by
>SMALL (forgive transgressions in the SMALL syntax, I'm still learning):
>
>SomeObject->[.reflexive_relationship]SomeObject
>{
>  SomeObject().attribute < 0;   // set the attribute for the "current" instance of SomeObject
>                                // (the parens are optional)
>  SomeObject(..).attribute < 1; // set the attribute for the previous instance of SomeObject
>}
>
>You could go really nuts and do
>
>SomeObject->[.reflexive_relationship]SomeObject->[.reflexive_relationship]SomeObject
>{
>  SomeObject().attribute < 0;      // set the attribute for the "current" instance of SomeObject
>  SomeObject(..).attribute < 1;    // set the attribute for the previous instance of SomeObject
>  SomeObject(../..).attribute < 2; // set the attribute for the previous-previous instance of SomeObject
>}
>
>There must be some parsing problem here. Fire when ready!!!
>
>Mike Vock
>SRA International (formerly w/Abbott Labs)

Subject: Re: (SMU) SMALL presentation issues

"Dean S. Anderson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Steve Mellor wrote:
[...snip...]
> The scope of iteration problem is more conceptual than presentation,
> but we do need to find an accessible way to talk about it.
[...snip...]
> -- steve

In our toolset, we deal with the "scope of iteration" issue by creating an SDFD that encloses the "iterated" process and then on the ADFD there is a single process bubble that "invokes" the iterated process. We allow both scalars and sets into the iterated process. A scalar will retain its value on each iteration, and the set(s) will be iterated across. The only restriction is that if there is more than one set into the process, the element count must be the same in all sets. Since SDFDs are already defined for wormholes, we thought it seemed like a good use here as well.

Dean S. Anderson
Transcrypt International
ka0mcm@winternet.com

Subject: Re: (SMU) SMALL presentation issues

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mellor...
> So far, the primary issue that has been raised regarding presentation
> in SMALL, as I recall anyway, is the terseness of the language.

Not quite. I regard the clutter introduced by the guarding notation as a detriment. As I pointed out originally, I think this could easily be handled in a separate description so that the only intrusion into the action would be labels on specific lines.

> This leaves '?' and '!' (from McCarthy's ' ? : '
> that later found its way into C and C++, and Dijkstra's guard concepts.)
> This set of ideas is also a 'feature' of the language--the notion that
> the testing and control flow is OUTSIDE the processing and computation.
> I am not as enamored of this syntax as I am with '|' and '>', but
> the idea is crucial to being able to move forward in translation:
> it is the separation of the 'wiring' from the computation that allows
> a smart architecture to reorganize the computations.

At another level, the main justification for guarding is to accommodate multiprocessor environments. However, I speculate that once the compiler vendors figure out how to optimize for such systems, that will be free in the implementation language and the SMALL guards will be redundant. If that comes to pass it might be useful to be able to skip the guarding entirely.

I believe that most really effective optimization is currently only available for Turing-based languages. Even OO languages like Eiffel use C as a meta language because the C can be optimized effectively while no one has a clue about how to optimize Eiffel directly to machine code. The compiler vendors have gotten pretty good at Turing languages after three decades. Therefore I think that the optimization of an action or a synchronous service should be left to the experts rather than customized in an architecture. Since S-M uses the FSM to isolate objects and actions, I do not see the need for the architecture to reorganize computations.

> As you can see, there is some scope for improving the accessibility
> of the syntax by replacing symbols with text operators. However,
> this seems to me to be at something of a low level. The broader
> issues are these:
> * Flows: The use of '|' '>' and left-to-right assignment
> * Separated Wiring: The separation of sequencing from the computations
> * Scope of Iteration: as defined in OOA96, the Crate example
>
> IMO, a good language for translation needs Flows and Separated Wiring.
> The Flow syntax is borrowed from UNIX and is relatively accessible (IMO).
> The test and guard syntax doesn't seem as neat, but .... any suggestions?
> The scope of iteration problem is more conceptual than presentation,
> but we do need to find an accessible way to talk about it.

I agree that there are two levels here. The low level is the syntax while the high level is the semantics. Given that one is doing a data flow language at the semantic level, whether one uses '|' vs. 'into' or 'from' is a detail. However, I think the into/from syntax in a data flow language leaves absolutely no doubt of the semantics while '|' depends upon one's familiarity with its use in other contexts. Similarly, 'write', '>', ':=', or '=' are all semantically equivalent in a data flow language but their intuitive recognition in the general population goes as: 'write', '=', ':=', and '>'. I submit that in figuring out the syntax one would be better off catering to non-programmers. Automated translation is pretty much a reality today and will certainly be normal in a decade.
Donning my pointy hat with the stars on it, I predict a day in the not-too-distant future when developing OOA models will be a specialty where many of the practitioners (Analysts) may have only a passing knowledge of various 3GLs and arcane operating systems. Developing Architectures might well be a Techy Specialty much akin to today's Unix System Manager. When those budding Analysts are introduced to OOA it would be easier on them if the notation used 'into' and 'write' rather than '|' and '>'. ----------- ----------- Now let me put on my pointy red hat and introduce Chaos and Confusion at the higher, semantic level. My question is: why does the language have to be pure data flow? Why can't the syntax be a hybrid of data flow and conventional Turing notations? I think this would allow complex computations to be represented in a compact manner. Similarly, depth-first iterations could be supported in a clearer and more conventional way. I also think it would be useful to introduce the topic near and dear to my heart -- a wider suite of set operations. In this proposal I am not arguing for an either/or situation; rather, I propose coexistence. For example, in addition to x, y | multiply > b; the language could support b = x * y; since these are semantically equivalent and any translator can Do the Right Thing with either statement. Beyond the appeal of supporting both Purists and Hackers, the dual notation could be useful for compacting real estate in action presentations in state models for computational sequences -- a benefit that should not be underestimated when most of your debugging is done with STD printouts in your lap. A more fundamental Turing extension would be a better definition of if/else clauses. If you are operating on a set, the SMALL form probably works better. However, if the condition is not a set, then the SMALL form is awfully wordy and clumsy compared to if/else with logical operators and blocks. It seems to me that the Purist data flow form could be preserved with the more verbose syntax (into, write, etc.) while the symbols could be used for the Turing-style alternatives. The basic building blocks (statements, tests, iterations, and blocks) are already supported by SMALL. Thus it seems to me that the issue comes down to whether it is aesthetically pleasing to have a larger language with redundant constructs. First, I submit that it would only be syntactically larger. For example, there would be only one iteration _structure_ supported but there would be two syntaxes for that same structure. This is in contrast to languages where 'while', 'for', or 'do...until' represent different semantics and result in different executions. Second, I submit that there are larger concerns than aesthetics. The value of a syntactically larger language would lie in the fact that it could appeal to a much wider audience. The reality is that S-M tenuously holds a narrow niche and that it needs to expand. One way to do this is to appear more user friendly to a wider audience. C++ is a horrible OO language but it owns the marketplace because it appealed to an army of C programmers. SMALL has the opportunity to provide both the Purist and Hacker appeal without sacrificing rigor as C++ did. -- H. S. Lahman ("There is nothing wrong with me that could not be cured by a capful of Drano") Teradyne/ATB 321 Harrison Av.
Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Contexts and Naming lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Mellor... Three options have been proposed: > (1) Name the instance reference set (ie make one traversal and store the > set of instances found in a reference variable) then use that name > for the next traversal > (2) Refer to the 'context' implicitly. Specifically, the syntax suggested > is the Unix directory traversal syntax, ie '..' So saying Dog( .. ) > means the _previous_ instance set, and City( .. ) in the first example > means the set of DogOwners. A variation is '...' or '../..' to mean > the previous-previous one > (3) Name the context using a ': label' syntax. For example, > Self -> [R1.owns:OwnedDogs->R3.IsFriendOf:DogFriends]Dog.A ... > Now the user can use OwnedDogs and DogFriends as reference variables. I would rule out (2) because of the corner case where a set of City seems to produce a subset of DogOwners. Also, the ../.. syntax can get confusing when multiple levels are involved. I don't want to have to count entries in complex statements to figure out what I am looking at or to find what I want. I think I would favor (1) over (3) on the general principle that complex statements are harder to grok. (1) effectively forces the analyst to use multiple simple statements in place of a single large, wordy one. On a practical note, when it becomes apparent after the initial action is written that one needs the intermediate set, one already has it with (1) but one needs to modify existing statements to get it with (3). I have gotten into the habit of doing (1) expressly for this reason. [The problem is not the keystrokes -- it is the fact that inserting the name will usually cause line wrap and whatnot that also needs to be fixed up for readable code. As a general rule I prefer to avoid multi-line statements, which is tough enough already when using meaningful identifiers.] -- H. S. Lahman ("There is nothing wrong with me that could not be cured by a capful of Drano") Teradyne/ATB 321 Harrison Av. Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Events and Anti-Events "John D. Yeager" writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Finn wrote: > > smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > . . . > > For every event (e) there exists an anti-event (p). The effect of > an anti-event (p) is defined to be the opposite of the effect of the > original event (e). In other words, an anti-event (p) will reverse > the effect of an event (e) leaving the system in the same state as > it was before the event (e) occurred. Oddly enough, the Time Warp operating system for parallel processing of simulations had such a concept. Objects send each other events with reception times set in the future. The various processors each choose the free object with the earliest posted event. It may happen that an object on another processor will send this object an event with an earlier event time. If this happens, the recipient of the late event will cancel any events it sent in response to having executed the later event out of order, by sending anti-events. These annihilate with the events they are canceling if they meet in the queue, or cause backout to occur.
-- John Yeager Cross-Product Architecture Lucent Technologies, Inc., Bell Labs johnyeager@lucent.com Business Communications Systems voice: (732) 957-3085 200 Laurel Ave, 4C-514 fax: (732) 957-4142 Middletown, NJ 07748 Subject: (SMU) Recursive Design Book yuki@toyo.co.jp writes to shlaer-mellor-users: -------------------------------------------------------------------- TO : Mr. Steve Mellor, Project Technology Inc. FROM : Yukitoshi Okumura, TOYO Corporation SUBJECT : Recursive Design Book Dear Steve, How have Sally and you been? And how has your big white dog been? I'm sorry, I forget his name. Just today, we completed the translation of Leon's book. It will be published by the end of this March. We hope that this Japanese edition will help Japanese OO Analysts. We think that it is time to consider the next issue. We are very much interested in your next book because, as I said to you, we need a more formal approach to OO design. Could you kindly let me know the progress or publishing date of the RD book? Thank you. Best Regards, Yukitoshi Okumura -------------------------------------------------------------------------- Yukitoshi Okumura | Tel:+81-3-5688-6800(Yushima Office) TOYO Corporation | Fax:+81-3-5688-6900(Yushima Office) 26-9, Yushima 3-chome, Bunkyo-ku | Tel:+81-462-47-6333(Atsugi Office) Tokyo 113-8514, Japan | e-mail yuki@toyo.co.jp | -------------------------------------------------------------------------- Subject: RE: (SMU) Contexts and Naming "Eric V. Smith" writes to shlaer-mellor-users: -------------------------------------------------------------------- I also dislike (2) for the reasons stated. I disagree with ruling out (3) because it allows for complex single statements. That's exactly the reason I suggested it to begin with! Let me state my ulterior motive for needing such complex statements, which I think also sheds some light on SMALL in general. I'm using SMALL as the basis of a language from which I plan to derive SQL SELECT statements. In order to generate single SELECT statements from relationships that are reflexive, I need to have this named association syntax. It lets me do things like:

  SELECT DogFriends.name
  FROM people DogFriends, people OwnedDogs
  WHERE {whatever formalizes this relationship is true, and where whatever expressions in the SMALL statement are true}

Since I know what formalizes the relationships, I can generate the WHERE clause directly. In order to avoid intermediate storage, I need the named contexts. I suppose if I did have to do (1) I could analyze the statements to remove the intermediate storage, but I'd rather the analyst just specify it and not force this on some translator. This is particularly nasty if there are intermediate statements, between where the reference variable is created and where it is used, that don't involve the reference variable. I thought one of the purposes of the language was to remove stuff like this (where the problem is being over-specified), not add it. Adding the label syntax from (3) wouldn't rule out using (1) if you feel it makes the statement more readable. But I think eliminating it to force simpler statements is a mistake. It seems sort of arbitrary, like saying no statement can be more than 80 characters, and you must use intermediate storage to prevent them from being so long! BTW, I've got another issue that I've not thought all the way through, but I'll put it here for possible public humiliation. It appears that using SMALL I can only select attributes from the terminal object in a statement.
Using Steve's original example, I can select attributes from DogFriends but not OwnedDogs or Self. But suppose what I really wanted was to get the names of the DogFriends, along with the names of the OwnedDogs. In my SELECT statement example, I want to do: SELECT DogFriends.name, OwnedDogs.name FROM ... {rest of example the same}. In SMALL, I don't think I can select attributes from an intermediate object like this, but I'm not sure why I'm not allowed to do it. Is it because (a) it's a bad idea, (b) the syntax might be ugly, or (c) I don't fully understand the problem, and if I changed the model I wouldn't need to get at the attributes this way? A colleague suggests it might be (c), but I think I can see situations where I definitely want to do this. Maybe I'm blinded by my translation into SELECT statements, however. When I started using SMALL, the terseness really turned me off. As I become more comfortable with it, the thought of making it less terse is definitely NOT appealing. I really like being able to specify complex relationship walking on (usually) a single line. Eric. -----Original Message----- From: lahman [SMTP:lahman@atb.teradyne.com] Sent: Wednesday, March 11, 1998 11:20 AM To: shlaer-mellor-users@projtech.com Subject: Re: (SMU) Contexts and Naming [...snip...] Subject: Re: (SMU) Contexts and Naming peterf@pathfindersol.com (Peter J.
Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 07:18 PM 3/10/98 -0800, shlaer-mellor-users@projtech.com wrote: >Steve Mellor writes to shlaer-mellor-users: > ... >Three options have been proposed: > (1) Name the instance reference set > ... I like this the best - it can be very clean, and it is the most flexible. > (2) Refer to the 'context' implicitly. ... I dislike this - it gets hard to follow and track things like this in a complex action. It's *really* tough on beginners (which is all of us right now). > (3) Name the context using a ': label' syntax. ... In general I'm in favor of C/C++ coding standards that prohibit nesting too much "stuff" on a single line - in the interest of readability. I don't like 3) for this reason. Summary: 1) is "good+"; 2) is "bad-"; 3) is "fair-" _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Subject: Re: (SMU) SMALL presentation issues Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Steve Mellor: A few rambling thoughts: > As you can see, there is some scope for improving the accessability > of the syntax by replacing symbols with text operators. However, > this seems to me to be at something of a low level. The broader > issues are these: > * Flows: The use of '|' '>' and left-to-right assignment > * Separated Wiring: The separation of sequencing from the computations > * Scope of Iteration: as defined in OOA96, the Crate example > > IMO, a good language for translation needs Flows and Separated Wiring. > The Flow syntax is borrowed from UNIX and is relatively accessible > (IMO). The Test and Guard doesn't seem as neat, but .... any > suggestions? > The scope of iteration problem is more conceptual than presentation, > but we do need to find an accessible way to talk about it. > >Are there other presentation issues that we should discuss? One of the problems with the Unix style of flow description is that it assumes that primary input and output flows exist: stdin goes in and stdout comes out. Anything else needs files (variables). (stderr has never been handled well with pipes.) Therefore the use of the same style in SMALL assumes that ADFDs are like that - one [composed] flow in, and one out. This frequently isn't the case; thus the clumsiness of guards and of labels in chained navigation. Transforms and wormholes are similarly challenged. Good examples of languages for describing more complex flows of information are found in hardware description languages. The structural subset of VHDL appears well suited to the task that SMALL is attempting to perform. It is also easy to relate it to ADFD diagrams. However, assuming that you want to stick with the SMALL style... SMALL does not completely separate the function and sequence. Accessors define their filters. It may be better to name a test process within the accessor instead of expressing the computation directly. Also, the space in the language that is taken by the filter expression could possibly be used for assignment of intermediate navigation outputs: A(one) -> [R1] B(all, >myB) -> [R2] C(all) > myC I am not sure why the "->" is needed for relationship navigation.
If the [R...] syntax is treated in the same way as a keyword (e.g. gen, link, etc.) then the above navigation could be expressed as: A(one) | [R1] B(all) > myB | [R2] C(all) > myC I am not sure whether the "> myB" bit belongs within brackets. It's not clear how to extract a combination of references and dataflows using this notation. I have mentioned previously that I would like to be able to assign test outputs to guards by name, as well as by position. A guard is just a dataflow with no data, so the same assignment and composition mechanisms should apply. This would become essential for tests with a large number of outputs, such as switch statements. [In reply to Lahman's comment about guards being justified by multi-processor issues, and that they could be eliminated by good compilers: I completely disagree. The role of guards is to define conditional execution. It's just a different syntax for "if" statements. If a guard is needed then it is not possible to implement correct functionality without equivalent semantics in the implementation.] I do not like the overloading of the ~ operator. It is used for both named dataflows and for composed dataflows, and elements within are accessed by position. I feel that the two concepts should be separated (possibly allowing guards and dataflows to be mixed within one composed flow). The scope of iteration, whilst conceptually simple, does need better specification. A base process definition must define whether its inputs and outputs are sets or scalars. This information can be combined with the multiplicity of the dataflows entering the derived process to determine the multiplicity of the derived processes. The only real problem occurs when two inputs that are scalar on the base process have different sizes of flow on the derived process. It would be simple to make this illegal, but it may be useful to define the concept of holes in a dataflow to allow the situation to arise in some cases. Imagine an accessor returns a set of instances with attributes x and y. Now suppose x goes through an is_even filter and y goes through an is_divisible_by_three filter. You then want to take the resulting flows into a "multiply" transform to provide an output set. Conceptually it is obvious what the output set must be (all members of the output set would be even multiples of three). Scope of iteration must resolve this network. All the processes after the accessor have scalar base processes -- filters are (1 -> 1c) and the transform is (1 * 1 -> 1). There is only one set generator in the network (the accessor), so all the subsequent sets must be based on its initial output. The scope of the iteration is defined by the converging of diverged flows. The flow from the output of a filter to the input of the multiply is the same size as the filter's input, but it has holes in it. It is, perhaps, easier to think of the filter as a test process whose output is a set of guards. In SMALL, we then have:

  A(all) > (x,y)
  x | isEven ? true => !even
  y | isMultOf3 ? true => !ofThree
  !even:!ofThree: (x,y) | multiply > ~z

And this brings me to another issue for the syntax of SMALL: the distinction between control and data flow. Guards are indicated by both the '!' and the ':'. The semantics of an ADFD should allow me to say: (!a, !b, ~x, ~y) | multiply > ~z The multiply process is only executed if all the inputs are available. The above syntax would handle ANDing guards. What about OR?
In an ADFD, an OR may be constructed as a joining of the control flow before it reaches a process. There are many ways of showing the combination; for example: !(a|b), ~x, ~y | multiply I think that if we can define scope of iteration, then it should also be possible to define negated guards. Dave. Not speaking for Mitel Semiconductor. -- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Events and Anti-Events smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to John D. Yeager... > Oddly enough, the Time Warp operating system for parallel processing of > simulations had such a concept. Objects send each other events with > reception times set in the future. The various processors each choose > the free object with the earliest posted event. It may happen that an > object on another processor will send this object an event with an > earlier event time. If this happens, the recipient of the late event > will cancel any events it sent in response to having executed the > later event out of order, by sending anti-events. These annihilate with > the events they are canceling if they meet in the queue, or cause > backout to occur. Despite its name (Time Warp), I think time-ordering events in this way implies the universal time of Newtonian physics: that time is the same everywhere. On the other hand, time in Shlaer-Mellor OOA seems to work more like time in Einstein's Special Theory of Relativity. The only ordering of events you can depend on is when one instance sends multiple events to another instance. Since I've introduced some Quantum Theory, I thought I might as well go the whole hog and mention Relativity too. B-) Strangely, there is an article in this week's New Scientist magazine about a new quantum theory of information being the most fundamental level of reality. Mike -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 Subject: RE: (SMU) Contexts and Naming "Eric V. Smith" writes to shlaer-mellor-users: -------------------------------------------------------------------- As I've said, I favor (3). If you don't like it, I suggest you do as your note suggests and develop an internal standard so that you don't use it, much like any coding standard. The language already allows (1), which you can continue to use if you don't like the proposed extension. However, I think that explicitly creating temporaries definitely falls into the category of over-specifying the problem. I especially think so because you could use this temporary at arbitrary places in the code that follows, and specify arbitrary intervening statements. To me, this is just like the example in the SMALL document where unrelated code is placed together. It fails the ham sandwich test. Eric. -----Original Message----- From: Peter J. Fontana [SMTP:peterf@pathfindersol.com] Sent: Thursday, March 12, 1998 12:30 PM To: shlaer-mellor-users@projtech.com; shlaer-mellor-users@projtech.com Subject: Re: (SMU) Contexts and Naming [...snip...]
Subject: Re: (SMU) Contexts and Naming lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Smith... > I also dislike (2) for the reasons stated. I disagree with ruling out > (3) because it allows for complex single statements. That's exactly > the reason I suggested it to begin with! > > Let me state my ulterior motive for needing such complex statements, > which I think also sheds some light on SMALL in general. I'm using > SMALL as the basis of a language from which I plan to derive SQL SELECT > statements. In order to generate single SELECT statements from > relationships that are reflexive, I need to have this named association > syntax. It lets me do things like: > SELECT DogFriends.name > FROM people DogFriends, people OwnedDogs > WHERE {whatever formalizes this relationship is true, and where > whatever expressions in the SMALL statement are true} Let me make sure I understand this. You are developing _another_ language that is based upon SMALL, so you want SMALL to include syntax you need for that other language to allow that language to do the highly specialized task of generating SQL statements. I don't think so. In SQL there is some justification for large expressions because the syntax is so specialized. [Not to mention that no one actually writes SQL anymore because they have IDEs that generate it.] However, SMALL is a general purpose language designed to describe the processing for any application that can be described with an OOA. I don't think it would be a good idea to introduce syntax into SMALL solely to facilitate only a small portion of applications -- especially if those applications are going to be described in another language anyway. In my view the maintainability of applications should be a major consideration in the design of languages. History has demonstrated that large, complex statements are more difficult to maintain than sequences of simple ones (other things being equal). If lack of readability or poor semantic comprehension are regarded as defects, then a language should be designed to prevent them. One way to do this is to eliminate large, complex statements as an option. > Since I know what formalizes the relationships, I can generate the > WHERE clause directly. In order to avoid intermediate storage, I need > the named contexts. I suppose if I did have to do (1) I could analyze > the statements to remove the intermediate storage, but I'd rather the > analyst just specify it and not force this on some translator.
> This is particularly nasty if there are intermediate statements between where > the reference variable is created and where it is used that don't > involve the reference variable. I thought one of the purposes of the > language was to remove stuff like this (where the problem is being > over-specified), not add it. I am having a problem getting a handle on the level of abstraction you are working at. If you are really doing a SQL compiler, then I would think your domain would have BNF-like objects (e.g., Production, Syntax Table, Syntax Row, Token, Terminal, etc.) and you would have syntax tables with rows for table and row identifier names. The SQL statement would be generated out of a BNF Production's action that took the relevant tokens as arguments (e.g., a transform process or an action of a particular Production subtype). This would be fairly straightforward and I don't see where (1) would be a problem. If the problem is that your data stores are only accessible via SQL, then I think that is a pure implementation issue. While I tend to advocate making the OOA reflect how software works on computers in general (e.g., ordered sets like arrays), I don't think that applies here. This is clearly a very specific implementation issue. I would go even further and say that I don't see (1) being a major problem in this case. The translation has to keep track of the intermediate data anyway. What you are really talking about is an optimization where multiple SQL statements are combined into a single statement; you could do it with individual statements. I would argue that most optimizations require extra effort in the translation and I don't see this one as being particularly tough because its scope is a single action. Compare that to determining whether you need an index for instances. Basically I see this optimization as a small price to pay in particular implementations compared to better readability across all applications. > Adding the label syntax from (3) wouldn't rule out using (1) if you > feel it makes the statement more readable. But I think eliminating it > to force simpler statements is a mistake. It seems sort of arbitrary, > like saying no statement can be more than 80 characters, and you must > use intermediate storage to prevent them from being so long! Once again the younger generation demonstrates a thorough lack of historical perspective. FYI, the 80 characters is far from arbitrary. For nearly three decades the Hollerith format for punched cards was The Standard. Even the early screen editors would truncate lines of more than 80 characters. To this day you can print using all defaults and be guaranteed to get no truncation or line wrap on any printer (except specialized ones like cash register tapes) if your lines are all less than 81 characters. When you get an opportunity to debug at a user's site with no sources except hardcopy that has truncated every third line you will start to gain an appreciation for the value of lines that are no longer than 80 characters. > BTW, I've got another issue that I've not thought all the way through, > but I'll put it here for possible public humiliation. It appears that > using SMALL I can only select attributes from the terminal object in a > statement. Using Steve's original example, I can select attributes > from DogFriends but not OwnedDogs or Self. But suppose what I really > wanted was to get the names of the DogFriends, along with the names of > the OwnedDogs.
> In my SELECT statement example, I want to do: > SELECT DogFriends.name, OwnedDogs.name > FROM ... {rest of example the same}. > > In SMALL, I don't think I can select attributes from an intermediate > object like this, but I'm not sure why I'm not allowed to do it. Is it > because (a) it's a bad idea, (b) the syntax might be ugly, or (c) I don't > fully understand the problem, and if I changed the model I wouldn't need > to get at the attributes this way? A colleague suggests it might be (c), > but I think I can see situations where I definitely want to do this. > Maybe I'm blinded by my translation into SELECT statements, however. I think it is all of the above, but primarily due to your devotion to a single-statement SELECT. Years ago I became attuned to the need to split up statements when I had to maintain a routine that computed the Muskat-Hoss Mass Balance Equation for petroleum reservoirs. The original author wrote the whole equation in one PL/I statement that extended over five lines, had over thirty parentheses, and a maximum parenthesis nesting level of eight. Ever since that character-building experience I have studiously broken statements up using temporary variables and I have blithely relied upon the compiler to Do the Right Thing for optimization. And every time I have to break up a statement to get at an intermediate value when doing maintenance I feel vindicated. Thus small statements have become a guidepost on the Path to Enlightenment for me. I see them as the natural Order of Things and nowadays it would probably never occur to me not to extract an intermediate value with a separate statement, even if I could. So we all have our biases. However, I think I can make two arguments for why SMALL should use separate statements to extract intermediate values that are used elsewhere: (1) If a design goal of SMALL is to track nicely with the ADFD notation, then this is the ADFD Way. (2) Functional encapsulation is a valid Good Practice like data encapsulation; it just doesn't have as good a press agent. Generally you don't want your functions to do double duty. If you need to extract two separate pieces of information that require different extraction processes, then do it in two different functions. I see the SELECT mode as an analog of this -- getting the set is one thing; filtering it is another. -- H. S. Lahman ("There is nothing wrong with me that could not be cured by a capful of Drano") Teradyne/ATB 321 Harrison Av. Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Contexts and Naming "Eric V. Smith" writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > Let me make sure I understand this. You are developing _another_ language > that is based upon SMALL, so you want SMALL to include syntax you need for > that other language to allow that language to do the highly specialized > task of generating SQL statements. I don't think so. I misspoke. I'm going to use SMALL if it will do what I need. If it won't, I'll either extend it or use a more complete language. Here's my point: SMALL, among other things, allows you to select object instances and attributes. SQL, among other things, allows you to select object instances and attributes (if you buy the idea that an object is a table).
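Concretely, the kind of selection I keep wanting might be written with the labels from option (3) -- a hypothetical sketch of an extension, since the draft allows attribute access only on the terminal object of a traversal, and ~owned_name and ~friend_name are just illustrative dataflow names:

  Self -> [R1.owns:OwnedDogs -> R3.IsFriendOf:DogFriends] Dog;
  (OwnedDogs.name, DogFriends.name) > (~owned_name, ~friend_name);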
If SQL allows a certain type of selection but SMALL doesn't, might it not be enlightening to understand the differences? Also keep in mind that SMALL already allows (in section 6.16) a subset of the more general behaviour that we're talking about here. I'd just like to see SMALL's syntax extended to be more complete. The restriction is that you can't access multiple instances if they happen to belong to the same object. To me, this seems restrictive and arbitrary. It might not seem so to you. I'd rather the language be more general and less narrow. > In my view the maintainability of applications should be a major > consideration in the design of languages. History has demonstrated that > large, complex statements are more difficult to maintain than sequences of > simple ones (other things being equal). If lack of readability or poor > semantic comprehension are regarded as defects, then a language should be > designed to prevent them. One way to do this is to eliminate large, > complex statements as an option. I agree completely on maintainability and readability. However, I think that adding temporaries is overspecifying the problem and makes this problem worse rather than better. Take a look at the example in 6.2 of the SMALL document. Here control is overspecified because the user had too much power in putting things in the control loops. Could a good compiler figure this out and separate out the logic into independent structures? Sure. Is it difficult? Yes. I've done it. I think you're headed down the same path with your argument. > If the problem is that your data stores are only accessible via SQL, then I > think that is a pure implementation issue. While I tend to advocate making > the OOA reflect how software works on computers in general (e.g., ordered > sets like arrays), I don't think that applies here. This is clearly a very > specific implementation issue. Yes, but for every business system that I've been involved in, this is a real issue. In my world (corporate IT systems), SQL _is_ the way datastores work, just as arrays are a real part of computing. > I would go even further and say that I don't see (1) being a major problem > in this case. The translation has to keep track of the intermediate data > anyway. What you are really talking about is an optimization where > multiple SQL statements are combined into a single statement; you could do > it with individual statements. I would argue that most optimizations > require extra effort in the translation and I don't see this one as being > particularly tough because its scope is a single action. Compare that to > determining whether you need an index for instances. Good point. I agree that it's not impossible to optimize this away; I just feel that the temporary is artificial and not part of the way that an analyst thinks about the problem. The analyst can solve this problem one way, then as soon as the problem involves a reflexive relationship, the way of describing the problem _must_ change. Again, it seems arbitrary to me to treat such reflexive relationships as second class and force a different notation when using them. You seem to think this is okay. We disagree. > Once again the younger generation demonstrates a thorough lack of > historical perspective. FYI, the 80 characters is far from > arbitrary. For nearly three decades the Hollerith format for punched cards > was The Standard. Even the early screen editors would truncate lines of more > than 80 characters.
> To this day you can print using all defaults and be > guaranteed to get no truncation or line wrap on any printer (except > specialized ones like cash register tapes) if your lines are all less than > 81 characters. Hey, I resent being characterized as the younger generation! I know all too much about Hollerith cards and card decks. And your version of computing history seems to omit the Apple II with its 40-column display! -- Eric V. Smith | For opinion in good men is but knowledge EricSmith@windsor.com | in the making. Windsor Software Corp +----------------------------------+ John Milton http://www.windsor.com/ Windows NT, Unix, SQL Server | 1608-74 Subject: RE: (SMU) Contexts and Naming Steve Mellor writes to shlaer-mellor-users: -------------------------------------------------------------------- At 05:59 AM 3/12/98 -0500, you wrote: >"Eric V. Smith" writes to shlaer-mellor-users: >-------------------------------------------------------------------- >BTW, I've got another issue that I've not thought all the way through, >but I'll put it here for possible public humiliation. It appears that >using SMALL I can only select attributes from the terminal object in a >statement. Using Steve's original example, I can select attributes >from DogFriends but not OwnedDogs or Self. But suppose what I really >wanted was to get the names of the DogFriends, along with the names of >the OwnedDogs. In my SELECT statement example, I want to do: >SELECT DogFriends.name, OwnedDogs.name >FROM ... {rest of example the same}. >In SMALL, I don't think I can select attributes from an intermediate >object like this, but I'm not sure why I'm not allowed to do it. Is it >because (a) it's a bad idea, (b) the syntax might be ugly, or (c) I don't >fully understand the problem, and if I changed the model I wouldn't need >to get at the attributes this way? A colleague suggests it might be (c), >but I think I can see situations where I definitely want to do this. >Maybe I'm blinded by my translation into SELECT statements, however. There are two issues here, one of which depends on the other. The first issue is the question of whether .. ->[Rx->Ry->Rz] is truly the same as ... ->[Rx->Ry] >temp; temp -> [Rz]. IMO, they are NOT. Consider four objects A, B, C, D connected by four relationships R1-4, where R4 is the composition of the other three. Let's say they're organized as: A [R1] B [R2] C [R3] D [R4=R1+R2+R3] A. Starting from an instance of A, we can traverse to instances of C via R4 and R3 (ie A [R4->R3] C). We store the result in temp, then we can go from the temporary to B via R2. The implementation of the first relationship traversal, A to C, is actually going to be A [R1->R2] because of the composition. The second relationship traversal (from C to B via R2) is fine. Had we instead expressed this navigation as a single traversal, then the traversal A [R4->R3->R2] B would have been replaced by the translator with a simple traversal of R1. Clearly the results are the same whichever way you represent it, but the fact that A [R4->R3->R2] B is the same as A [R1] is lost. The second issue is whether it's OK to access the data of an object 'along the way'. For example, A -> [R4].access_of_D -> [R3->R2] B. Following the logic above, we never even 'visit' D, so what could the statement mean????
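In SMALL terms, the forms at issue look something like this (a sketch only; the '//' annotations are commentary, not syntax):

  A(one) -> [R4->R3] C(all) > temp;              // split form: implemented as [R1->R2]
  temp -> [R2] B(all);                           // second traversal; the [R1] equivalence is lost
  A(one) -> [R4->R3->R2] B(all);                 // single form: reducible by the translator to [R1]
  A(one) -> [R4].access_of_D -> [R3->R2] B(all); // access 'along the way' at D, which is never visited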
I believe it's possible to define a reasonable semantic for this kind of statement--after all, we do seem to have an understanding of its meaning--but to my mind, this violates the underlying meta-model of the language required for translation. -- steve mellor Subject: (SMU) Polymorphic events David Stone writes to shlaer-mellor-users: -------------------------------------------------------------------- What does the method say about how they work? The only description in the official literature consists of the hints in the OOA '96 Report, 5.7.2, 6.3, and figs 6.1 and 6.2.
Some issues: (1) What happens if sub/supertype links are being changed while the event is being generated? Fig. 6.2 assumes that the operation "find some instance in some subtype" always works. Does this mean that the architecture must ensure that the generating action waits until a subtype exists? (Presumably, in the same way as in 8.5 a non-self-deletion accessor must wait until the completion of the action of the instance being deleted.) (2) How are polymorphic events treated as regards the rule saying that self-generated events have priority? 5.7.2 talks about the "true receiver"'s being the subtype. This suggests (but no more) that a polymorphic event sent by a subtype instance to its supertype instance should be treated as a self-generated event when it is received by the subtype. (3) What does the remark in 5.7.2 mean: "the state model of the supertype object plays no role in the routing of a polymorphic event"? A state model includes the actions (I guess) and surely such actions may cause subtype migration, thus affecting which subtype object receives polymorphic events. Is the STT meant? (4) What rule prevents loops in the sub/supertype hierarchy? Suppose in R1 A was a supertype of B, and in R2 B was a supertype of A. Then resolving the ultimate destination of a polymorphic event sent to an instance a of A is difficult: does it go to a->R1 or a->R1->R2->R1 or a->R1->R2->R1->R2->R1, or ...? Our architecture bans such loops, and detects them when translating. However, this rule is not given in any official statement of the method: why not? If such loops are permitted, what happens to polymorphic events? (5) May instances of non-leaf objects in a sub/supertype tree (that have state models) receive polymorphic events? 5.7.2 is unclear, but does not say anything definite against this. (6) If the answer to (5) is "yes", do the subtype instances of such objects also receive the event? e.g. consider a case in which object A is a supertype of B, and B of C, and A is passive, B and C active. If a polymorphic event is generated to an instance of A, will it be delivered to the linked B, or to the linked B _and_ the C linked to the B? Figure 6.1 suggests not. (7) What is a _complete_ polymorphic event table, as the word is used in fig 6.1? (This is related to the answers to questions (5) and (6).) For all these, I am most interested in either answers in the official documentation of the method, or in cogent arguments why one or other interpretation must be right. Project Technology, if you're listening, please add these to the list of aspects of the method which need clarification. -- David Stone Sent by courtesy of, but not an official communication from: Simoco Europe, P.O.Box 24, St Andrews Rd, CAMBRIDGE, CB4 1DP, UK Subject: Re: (SMU) Contexts and Naming lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Mellor... This looks like it might be one of those fun threads where I get to maunder on for awhile talking about something completely different than everybody else... > The first issue is the question of whether .. ->[Rx->Ry->Rz] is > truly the same as ... ->[Rx->Ry] >temp; temp -> [Rz]. IMO, they are NOT. > > Consider four objects A, B, C, D connected by four relationships R1-4, > where R4 is the composition of the other three. Let's say they're > organized as: A [R1] B [R2] C [R3] D [R4=R1+R2+R3] A. > > Starting from an instance of A, we can traverse to instances of C > via R4 and R3 (ie A [R4->R3] C).
> We store the result in temp, > then we can go from the temporary to B via R2. > > The implementation of the first relationship traversal, A to C, > is actually going to be A [R1->R2] because of the composition. > The second relationship traversal (from C to B via R2) is fine. > > Had we instead expressed this navigation as a single traversal, > then the traversal A [R4->R3->R2] B would have been replaced > by the translator with a simple traversal of R1. > > Clearly the results are the same whichever way you represent it, > but the fact that A [R4->R3->R2] B is the same as A [R1] is lost. I guess it depends upon what you mean by "truly the same as". It seems to me that from the viewpoint of the OOA they are exactly the same. If the relationship is actually composed in the IM (a la OOA96) then this fact is explicit and is not lost. If the IM relationship is not explicitly composed, then it is up to the analyst to verify that they are the same by doing referential loop analysis. In either case, it seems to me that nothing is lost -- the OOA is broken if you can get to a different instance via different paths, regardless of whether those paths go through a temp. (Assuming all the navigation is done in a single action and that is the atomic unit for referential integrity.) It also seems to me that the translation mechanism is a red herring. Whatever way the translation chooses to handle relationship navigation, the architecture still has to maintain referential integrity so R2 can't be changed between when the temp is identified and when B is accessed. Furthermore, I would think that the translation is free to optimize the navigation to eliminate the temp if it is not otherwise explicitly accessed in the action. Even if one considers simulation where a check is needed to determine if referential integrity has been abused, I don't think the check really cares about the temp or which path was traversed. The check merely has to verify that one got to an instance of B that is correct according to the IM. > The second issue is whether it's OK to access the data of an object > 'along the way'. For example, A -> [R4].access_of_D -> [R3->R2] B. > Following the logic above, we never even 'visit' D, so what could > the statement mean???? As I indicated above, I think this depends upon how the access is used. In this example, I would think the translation can optimize it out entirely since the only relevant activity is getting to a B and that should be independent of path. However, if one also wants to access other attributes of D in the action as well as traverse to B, then the translation has to provide navigation to both D and B. But it could get B via R1 and then get D via R2 and R3 rather than going through R4. In this case R4 would be irrelevant in the implementation, even though the analyst happened to specify it in the action. Since the IM defines referential integrity insofar as referential loops are concerned, such optimizations should be allowed. -- H. S. Lahman ("There is nothing wrong with me that could not be cured by a capful of Drano") Teradyne/ATB 321 Harrison Av. Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) Contexts and Naming "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- >Steve Mellor writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >There are two issues here, one of which depends on the other.
> >The first issue is the question as to whether .. ->[Rx->Ry->Rz] is >truly the same as ... ->[Rx->Ry] >temp; temp -> [Rz]. IMO, they are NOT. > >Consider four objects A, B, C, D connected by four relationships R1-4, >where R4 is the composition of the other three. Let's say they're >organized as: A [R1] B [R2] C [R3] D [R4=R1+R2+R3] A. > >Starting from an instance of A, we can traverse to instances of C. >via R4 and R3 (ie A [R4->R3] C). We store the result in temp, >then we can go from the temporary to B via R2. > >The implementation of the first relationship traversal A to C >is actually going to be A [R1->R2] because of the composition. >The second relationship traversal (from C to B via R2) is fine. > >Had we instead formalized this relationship as a single traversal, >then the traversal A [R4->R3->R2] B would have been replaced >by the translator, to a simple traversal of R1. > >Clearly the results are the same whichever way you represent it, >but the fact that A [R4->R3->R2] B is the same as A [R1] is lost. > >The second issue is whether it's OK to access the data of an object >'along the way'. For example, A -> [R4].access_of_D -> [R3->R2] B. >Following the logic above, we never even 'visit' D, so what could >the statement mean???? > >I believe it's possible to define a reasonable semantic for this kind >of statement--after all, we do seem to have an understanding of its >meaning--but to my mind, this violates the underlying meta-model >of the language required for translation. > >-- steve mellor ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ On the first issue, it appears that it is only the composed relationship which invalidates the use of the temporary. Is this true? On the second issue, it seems that your argument is that we don't visit D because having to visit D would prevent the substitution of another relationship (i.e., an optimization.) I think such an argument puts the cart before the horse. Shouldn't we be deciding whether the "grab data as you traverse" semantic gives a desirable expressive power to the models or, conversely, too much rope to get tangled in? -Chris Lynch Abbott AIS San Diego Subject: Re: (SMU) Polymorphic events Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- David Stone wrote: > > David Stone writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > > Some issues: > > (1) What happens if sub/supertype links are being changed while the > event is being generated? > > Fig. 6.2 assumes that the operation "find some instance in some > subtype" always works. Does this mean that the architecture must > ensure that the generating action waits until a subtype exists? No. If no subtype instance exists, it is a fatal error. Since the supertype has no state model and cannot accept the event, it is assumed to be abstract and must have exactly 1 corresponding subtype instance. > > (2) How are polymorphic events treated as regards the rule saying that > self-generated events have priority? > > 5.7.2 talks about the "true receiver"'s being the subtype. This > suggests (but no more) that a polymorphic event sent by a subtype > instance to its supertype instance should be treated as a > self-generated event when it is received by the subtype. This is a reasonable assumption to make. > > (3) What does the remark in 5.7.2 mean: "the state model of the > supertype object plays no role in the routing of a polymorphic event"? 
> > A state model includes the actions (I guess) and surely such actions > may cause subtype migration, thus affecting which subtype object > receives polymorphic events. Is the STT meant? I think it simply means that in order to use polymorphic events, the supertype cannot have a state model. Therefore, the supertype state model can't play any part in routing the events. I think the point is to emphasize that polymorphic event routing is handled by the architecture. > > (4) What rule prevents loops in the sub/supertype hierarchy? > There is no specific rule stating that circular subtype/supertype relationships are incorrect, but I think it is completely reasonable to rule them out of your architecture. Allowing circular sub/supers would cause you much pain. I'm not really even sure what they would mean. > > (5) May instances of non-leaf objects in a sub/supertype tree (that > have state models) receive polymorphic events? > > 5.7.2 is unclear, but does not say anything definite against this. A polymorphic event with the key letters of the supertype would be mapped to the key letters of a non-leaf subtype. > > (6) If the answer to (5) is "yes", do the subtype instances of such > objects also receive the event? e.g. consider a case in which object > A is a supertype of B, and B of C, and A is passive, B and C > active. If a polymorphic event is generated to an instance of A, will > it be delivered to the linked B, or to the linked B _and_ the C linked > to the B? > > Figure 6.1 suggests not. How would the event get routed to B and C? A polymorphic event would have to be mapped to more than 1 true event. > > (7) What is a _complete_ polymorphic event table, as the word is used > in fig 6.1? > > (This is related to the answers to questions (5) and (6).) I think by complete, they mean that the event is mapped to some other event in every possible subtype. If the event was not mapped to all possible subtypes, there would be a possibility of a run-time error if a polymorphic event were directed at an unmapped subtype. Carolyn -- ____________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for OOA/RD challenges | | Carolyn Duby voice: +01 508-384-1392| carolynd@pathfindersol.com fax: +01 508-384-7906| ____________________________________________________| Subject: Re: (SMU) Polymorphic events Neil Lang writes to shlaer-mellor-users: -------------------------------------------------------------------- At 12:44 PM 3/25/98 GMT, >David Stone writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >What does the method say about how they work? The only description in >the official literature consists of the hints in the OOA '96 Report, >5.7.2, 6.3, and figs 6.1 and 6.2. As one of the co-authors of the OOA96 report and someone deeply involved in the work on polymorphic events, I'd like to try to respond to your questions. > >Some issues: > > >(1) What happens if sub/supertype links are being changed while the >event is being generated? > >Fig. 6.2 assumes that the operation "find some instance in some >subtype" always works. Does this mean that the architecture must >ensure that the generating action waits until a subtype exists? >(Presumably, in the same way as in 8.5 a non-self-deletion accessor >must wait until the completion of the action of the instance being >deleted.) The methodology requires that instances of subtypes and supertypes be created at the same time. 
So there should always be a subtype instance for the polymorphic event. I'd consider an analyst's attempt to generate an event to the subtype prior to or during its creation to be BAD analysis. A related situation involves an event generated to an instance of a subtype, but the instance migrates to another subtype prior to dealing with the event. In this case it's the responsibility of the architecture to deal with that event prior to or after the migration but not during. It's the responsibility of the analyst to ensure that both subtypes can accept the event. >(2) How are polymorphic events treated as regards the rule saying that >self-generated events have priority? > >5.7.2 talks about the "true receiver"'s being the subtype. This >suggests (but no more) that a polymorphic event sent by a subtype >instance to its supertype instance should be treated as a >self-generated event when it is received by the subtype. > > The whole point of a self-directed event is that you generate it directly to yourself. I can't see why you would do that polymorphically since you know your subtype. >(3) What does the remark in 5.7.2 mean: "the state model of the >supertype object plays no role in the routing of a polymorphic event"? > >A state model includes the actions (I guess) and surely such actions >may cause subtype migration, thus affecting which subtype object >receives polymorphic events. Is the STT meant? We at PT have long felt that there's a need for a state model at one and only one level in a subtype/supertype hierarchy. So in general there wouldn't be a state model at the supertype. However, in the case that an analyst chooses to do so, the supertype state model is not responsible for accepting the polymorphic event and redirecting it (as was once proposed); rather that's the responsibility of the architecture. Think of a polymorphic event as an alias for the real event directed to the specific subtype. The architecture's job is to dealias it, not deliver it to the supertype. > > >(4) What rule prevents loops in the sub/supertype hierarchy? > >Suppose in R1 A was a supertype of B, and in R2 B was a supertype of >A. Then resolving the ultimate destination of a polymorphic event >sent to an instance a of A is difficult: does it go to a->R1 >or a->R1->R2->R1 or a->R1->R2->R1->R2->R1, or ...? Our architecture >bans such loops, and detects them when translating. However, this >rule is not given in any official statement of the method: why not? >If such loops are permitted, what happens to polymorphic events? > > The fundamental concept underlying the sub/supertype construct is that of conservation of the number of instances at different levels in the hierarchy. For example, if A has B and C as subtypes and C has D and E as subtypes, the total number of instances of A -- N(A) -- must equal N(B)+N(C), and N(C) must equal N(D)+N(E). Structures that violate that are illegal. Consider the following: T has R and S as subtypes; R has X and Y as subtypes; S has Y and Z as subtypes (Y has both R and S as supertypes). Create an instance of Y. The methodology requires a corresponding instance of both R and S, and they both require corresponding instances in T. Thus T must have 2 instances while Y has only 1. Such a structure is syntactically incorrect. I believe a similar argument will apply to your example, so I'd expect it to be syntactically incorrect.
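To make the conservation rule concrete, here is a minimal sketch of the N(super) = sum of N(sub) check in Python; the dictionary layout and names are invented for illustration and are not taken from any published architecture:

    # Hypothetical layout: supertype name -> names of its subtypes,
    # and object name -> current instance count.
    hierarchy = {"A": ["B", "C"], "C": ["D", "E"]}
    population = {"A": 10, "B": 4, "C": 6, "D": 2, "E": 4}

    def check_conservation(hierarchy, population):
        """Report every supertype whose count != sum of its subtype counts."""
        violations = []
        for supertype, subtypes in hierarchy.items():
            expected = sum(population.get(s, 0) for s in subtypes)
            if population.get(supertype, 0) != expected:
                violations.append((supertype, population.get(supertype, 0), expected))
        return violations

    print(check_conservation(hierarchy, population))   # [] -- consistent

In the T/R/S/Y structure above, every instance of the shared subtype Y would feed both the R sum and the S sum, so T could never satisfy the check -- the double-counting argument in executable form.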
>(5) May instances of non-leaf objects in a sub/supertype tree (that >have state models) receive polymorphic events? > >5.7.2 is unclear, but does not say anything definite against this. Again, the state models may be associated with interior (non-leaf) nodes in a subtype/supertype hierarchy. The only requirement is that there be a state model somewhere in the hierarchy for each instance. > >(6) If the answer to (5) is "yes", do the subtype instances of such >objects also receive the event? e.g. consider a case in which object >A is a supertype of B, and B of C, and A is passive, B and C >active. If a polymorphic event is generated to an instance of A, will >it be delivered to the linked B, or to the linked B _and_ the C linked >to the B? > >Figure 6.1 suggests not. Again, since we believe that there's a need for only one state model in the hierarchy, I don't see this ever being an issue. However, if an analyst were to choose to do so then the polymorphic mapping tables (created by the analyst) should map the incoming polymorphic event to either the B state model or the C, but not both. > > >(7) What is a _complete_ polymorphic event table, as the word is used >in fig 6.1? > >(This is related to the answers to questions (5) and (6).) Simply that there must be complete coverage over all possible subtypes. For example, if M has active subtypes L, K, and J, for each polymorphic event to M there must be a mapping defined to each of L, K, and J. > >For all these, I am most interested in either answers in the official >documentation of the method, or in cogent arguments why one or other >interpretation must be right. > >Project Technology, if you're listening, please add these to the list >of aspects of the method which need clarification. > I hope that this has cleared up some of these issues. Neil ---------------------------------------------------------------------- Neil Lang nlang@projtech.com Project Technology, Inc. 510-567-0255 x623 10940 Bigge Street San Leandro, CA 94577 http://www.projtech.com ---------------------------------------------------------------------- Subject: Re: (SMU) Polymorphic events Neil Lang writes to shlaer-mellor-users: -------------------------------------------------------------------- At 12:01 PM 3/25/98 -0500, >Carolyn Duby writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >David Stone wrote: >> >> David Stone writes to shlaer-mellor-users: >> -------------------------------------------------------------------- >> >> >> Some issues: >> >> (1) What happens if sub/supertype links are being changed while the >> event is being generated? >> >> Fig. 6.2 assumes that the operation "find some instance in some >> subtype" always works. Does this mean that the architecture must >> ensure that the generating action waits until a subtype exists? > >No. If no subtype instance exists, it is a fatal error. Since the It's bad analysis to fail to create the corresponding subtype instance but I don't think I'd shoot the analyst for forgetting to do so :-) >supertype has no state model and cannot accept the event, it is >assumed to be abstract and must have exactly 1 corresponding subtype >instance. Huh??? Could you please explain what you mean by "assumed to be abstract"? That concept is not part of the methodology. ....deletia.... Neil ---------------------------------------------------------------------- Neil Lang nlang@projtech.com Project Technology, Inc.
510-567-0255 x623 10940 Bigge Street San Leandro, CA 94577 http://www.projtech.com ---------------------------------------------------------------------- Subject: Re: (SMU) Polymorphic events Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- Neil Lang wrote: > It's bad analysis to fail to create the corresponding subtype instance > but I don't think I'd shoot the analyst for forgetting to do so :-) Ok. I suppose that's a little harsh, but what's an architecture to do in a situation like that :-) > > >supertype has no state model and cannot accept the event, it is > >assumed to be abstract and must have exactly 1 corresponding subtype > >instance. > > Huh??? Could you please explain what you mean by "assumed to be > abstract". That concept is not part of the methdology. > Sorry for the confusion. I was down at the implementation level. I meant that when a supertype is instantiated, it must always have a corresponding subtype. This is what you would call abstract in C++. Carolyn PS - Thanks for the thorough response on the polymorphic event issues. -- ____________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for OOA/RD challenges | | Carolyn Duby voice: +01 508-384-1392| carolynd@pathfindersol.com fax: +01 508-384-7906| ____________________________________________________| Subject: Re: (SMU) Polymorphic events baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Neil Lang writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >At 12:44 PM 3/25/98 GMT, >>David Stone writes to shlaer-mellor-users: >>-------------------------------------------------------------------- >> >>(1) What happens if sub/supertype links are being changed while the >>event is being generated? >> >>Fig. 6.2 assumes that the operation "find some instance in some >>subtype" always works. Does this mean that the architecture must >>ensure that the generating action waits until a subtype exists? >>(Presumably, in the same way as in 8.5 a non-self-deletion accessor >>must wait until the completion of the action of the instance being >>deleted.) > >The methodology requires that instances of subtypes and supertypes >be created at the same time. So there should always be a subtype >instance for the polymorphic event. I'd consider an analyst's >attempt to generate an event to the subtype prior to or >during its creation to be BAD analysis. > >A related situation involves an event generated to an instance of >a subtype, but the instance migrates to another subtype prior to >dealing with the event. In this case it's the responsibility of the >architecture to deal with that event prior to or after the migration >but not during. It's the responsibility of the analyst to ensure >that both subtypes can accept the event. > I think Neil is right on target here, and he has brought up a nuance that I have struggled with in the past. Consider migrating subtypes that are modeled with born and die lifecycles in which the delete state of one subtype generates the create event to the next subtype. I think it necessary to consider this create event to be "self-directed", meaning that it will be received before any other events directed at the same instance. This prevents other events from being received between the time one subtype is deleted and the next is created. 
This is consistent with the architecture being responsible for _not_ receiving any other events during a migration as Neil has stated above. I think it would be beneficial if the OOA96 Rule for expediting self-directed events were expanded (or clarified) to include this situation. Conceptually, the event really _is_ being directed to the same instance. This situation might also be handled by synchronously creating the next subtype instance from the delete state of the first, but there are disadvantages to that. Bary Hogan LMTAS Subject: Re: (SMU) Contexts and Naming lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch... > On the second issue, it seems that your argument is that we don't visit > D because having to visit D would prevent the substitution of another > relationship (i.e., an optimization.) I think such an argument puts the > cart before the horse. Shouldn't we be deciding whether the "grab data > as you traverse" semantic gives a desirable expressive power to the > models or, conversely, too much rope to get tangled in? I agree. You have more succinctly made my point that one should render unto the translation the things that are the translation's and render unto the OOA the specifications. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Polymorphic events lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lang... > >(2) How are polymorphic events treated as regards the rule saying that > >self-generated events have priority? > We at PT have long felt that there's a need for a state model at one and > only one level in a subtype/supertype hierarchy. So in general > there wouldn't be a state model at the supertype. However, in the case > that an analyst chooses to do so, the supertype state model > is not responsible for accepting the polymorphic event and redirecting > it (as was once proposed); rather that's the responsibility of the > architecture. Think of a polymorphic event as an alias for the > real event directed to the specific subtype. The architecture's > job is to dealias it, not deliver it to the supertype. I thought the feeling was more of a passion. Specifically, that supertypes are not instantiated at all -- that the entire hierarchy is embodied in a single instance. Thus having a state machine in the supertype as well as a subtype was simply a notational simplification to eliminate the need to explicitly define redundant states in each subtype state machine. Then OOA96 plugged the notational hole for dealing with externally generated events to the supertype by defining an address modification scheme for the architecture. Under this interpretation, all sub-to-super and super-to-sub events would be self-directed because they just go from and to the same instance. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av.
L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Migrating Subtypes - Rather not Mike Frankel writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary Hogan wrote: > > >Neil Lang writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > >At 12:44 PM 3/25/98 GMT, > >>David Stone writes to shlaer-mellor-users: > >>-------------------------------------------------------------------- > >> > >>(1) What happens if sub/supertype links are being changed while the > >>event is being generated? > >> > >>Fig. 6.2 assumes that the operation "find some instance in some > >>subtype" always works. Does this mean that the architecture must > >>ensure that the generating action waits until a subtype exists? > >>(Presumably, in the same way as in 8.5 a non-self-deletion accessor > >>must wait until the completion of the action of the instance being > >>deleted.) > > > >The methodology requires that instances of subtypes and supertypes > >be created at the same time. So there should always be a subtype > >instance for the polymorphic event. I'd consider an analyst's > >attempt to generate an event to the subtype prior to or > >during its creation to be BAD analysis. > > > >A related situation involves an event generated to an instance of > >a subtype, but the instance migrates to another subtype prior to > >dealing with the event. In this case it's the responsibility of the > >architecture to deal with that event prior to or after the migration > >but not during. It's the responsibility of the analyst to ensure > >that both subtypes can accept the event. > > > > I think Neil is right on target here, and he has brought up a nuance > that I have struggled with in the past. > > Consider migrating subtypes that are modeled with born and die > lifecycles in which the delete state of one subtype generates the > create event to the next subtype. I think it necessary to consider > this create event to be "self-directed", meaning that it will be > received before any other events directed at the same instance. This > prevents other events from being received between the time one subtype > is deleted and the next is created. This is consistent with the > architecture being responsible for _not_ receiving any other events > during a migration as Neil has stated above. > > I think it would be beneficial if the OOA96 Rule for expediting > self-directed events were expanded (or clarified) to include this > situation. Conceptually, the event really _is_ being directed to the > same instance. > > This situation might also be handled by synchronously creating the > next subtype instance from the delete state of the first, but there > are disadvantages to that. > > Bary Hogan > LMTAS It has been my experience that most of the problems introduced into analysis and architectures as a result of modeling objects with migrating subtypes can be eliminated with a simple rule: Migrating subtypes are BAD(to quote Neil) analysis. Now, I know this is a common practice among OOA'ers. However, the practice is not consistent with some basic precepts that SM OOA'ers hold dear: 1) That an instance in the real world, regardless of how many levels of supertype/subtype hierarchy it has been abstracted into, is all levels at once. 
That is, just because aspects of its supertype half aren't being used or addressed dynamically at some point in time, doesn't mean that we don't recognize its existence in the complete abstraction. The rule is all instance "levels" are created and deleted at the same time, because as an entirety, they fully abstract the real world counterpart. and, 2) That an object's state model represents the *potential* lifecycle of all instances of that object. That is, we allow an instance to take on behaviors of each state as it transitions through its lifecycle. We do not expect that it will be executing state D behaviors while in state B, although we know that if it reaches state D, it will execute those behaviors then. The model accounts for the possibilities. In fact, any particular instance may take an alternate state path (if modeled) and not reach some of the states it had the potential to reach. In my opinion, not instantiating the "alternate" subtype possibilities at the same time as the initial subtype instance carries the same failure to recognize the existence of the complete abstraction as does creating a supertype instance without its subtype. In an example where a supertype Employee has three subtypes, Peon, Manager, and Executive, and there is a recognition that subtype membership is not mutually exclusive for any one instance, our OOA conventions for multiple subtype behavior are as follows: For every creation of a new real-world instance, the analyst must decide (through an assigner, terminator, etc...) exactly which combination of subtypes (one, two or more, or all) the instance has the *potential* to be. At that point, all instance possibilities at all necessary levels are created. For example, it will be predetermined for every Employee instance created whether it will only ever be a Peon, or whether its potential to rise to Executive status will be recognized through creation now of "dormant" Manager and Executive subtype instances. Dormant instances may likely remain in an Idle or Dormant state until the EXEC1:Promotion event is generated to the instance. One of the problems with subtype migration (defined as deleting one subtype in exchange for another) is that it does not account for the case of overlapping subtype behaviors. For example, it is common for an instance of Employee to perform both Peon and Manager tasks for a while, during the period when his or her responsibility list is changing. In fact, as a full-fledged Manager, the Employee instance may rarely have events generated to its once fruitfully active Peon counterpart, though it does happen sometimes that Managers might jump in and do some Peon work when deadlines are tight. By creating all subtype possibilities at the beginning, we account for: a) normal mutually exclusive subtyping; b) subtype "migration"; and c) overlapping subtype behavior. We can also easily account for interaction between the subtype counterparts, such as when a Manager assigns himself a programming task, to be performed using Peon behavior, and needs to communicate between his own instance state machines. If you want to model a requirement that says once "migration" occurs, the "from" subtype will never again be active, it can simply transition to a final, dormant state from which no new events can be received. And as always, because this is only analysis, not implementation, space used up by inactive subtype counterparts can be dealt with efficiently through a number of memory management techniques.
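To make the convention concrete, here is a minimal sketch, assuming a simple in-memory representation; the class, role, and state names (including EXEC1:Promotion) are just the illustrative ones used above, and none of this comes from a published architecture:

    # Sketch of "create all potential subtypes up front".
    class SubtypeInstance:
        def __init__(self, role, state):
            self.role = role      # e.g. "Peon", "Manager", "Executive"
            self.state = state    # e.g. "Active" or "Dormant"

    def hire_employee(potential_roles, initial_role):
        """Create one subtype counterpart per potential role; every
        role except the initial one starts out Dormant."""
        return {role: SubtypeInstance(role,
                                      "Active" if role == initial_role else "Dormant")
                for role in potential_roles}

    # An Employee hired as a Peon with Manager potential but no
    # Executive potential: only two counterparts are ever created.
    employee = hire_employee(["Peon", "Manager"], initial_role="Peon")

A later EXEC1:Promotion event would then wake the dormant Manager counterpart rather than deleting the Peon and creating a Manager.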
It has always been my belief that subtype migration is sloppy abstractionism. Mastering its nuances is certainly challenging, both at an analysis and architectural level, so kudos to those that have solved the problems. However, we avoid the problems altogether by recognizing that the required subtypes are all there at all times. Just like any Employee hired as a Peon either has the potential to rise to Manager or Executive status, even if they don't actually achieve it themselves, or is hired directly as an Executive, and probably has a contract that says "I will never do Peon work" thus requiring no Peon subtype instance to be created at all. -- ----------------------------------------------- Mike Frankel Director of Software Engineering Esprit Systems Consulting, Inc. 610-436-8290 fax 610-436-9848 mfrankel@EspritInc.com http://www.EspritInc.com -->Domain Engineering For Reuse -->Vital Link Team Consulting -->BASELINE Domain Models -->Object Methods Training and Consulting -->Structured Methods Training and Consulting "Strategies for Computer and Human Systems" ----------------------------------------------- Subject: Re: (SMU) Migrating Subtypes - Rather not baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: -------------------------------------------------------------------- >Mike Frankel writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >It has been my experience that most of the problems introduced into >analysis and architectures as a result of modeling objects with >migrating subtypes can be eliminated with a simple rule: > >Migrating subtypes are BAD(to quote Neil) analysis. I simply disagree. I have found migrating subtypes to be _extremely_ useful. When a real world entity has different lifecycles in different "phases" of its existence, then migrating subtypes are a good way to model each lifecycle in a concise and understandable way. The alternative is to combine all the lifecycles into one huge and unmanageable state model in the supertype. > >Now, I know this is a common practice among OOA'ers. However, the >practice is not consistent with some basic precepts that SM OOA'ers >hold dear: > >1) That an instance in the real world, regardless of how many levels >of supertype/subtype hierarchy it has been abstracted into, is all >levels at once. That is, just because aspects of its supertype half >aren't being used or addressed dynamically at some point in time, >doesn't >mean that we don't recognize its existence in the complete abstraction. >The rule is all instance "levels" are created and deleted at the same >time, >because as an entirety, they fully abstract the real world counterpart. Yes, when an instance is created, you must create an instance at each level, and the same goes for deletions. However, subtype migration is a different issue. In my mind, the actual instance is not deleted, it just stops being one subtype and begins being the other. Of course, you must delete one subtype and create the other, but this is different than creating or deleting the entire instance. I do perceive a slight weakness in the method here, since it is a bad thing for the subtype to be out of existence for any period of time. Therefore, I think that the migration should be instantaneous, but I'm not sure how to represent that with events. I have found that treating the event that migrates the instance as self-directed (per OOA96) solves any practical problems. 
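One way an architecture might honor that "self-directed first" rule is sketched below; the queue layout and event names are invented for illustration, not a description of any particular architecture:

    # Sketch: expediting self-directed events in a per-instance queue.
    from collections import deque

    class InstanceQueue:
        def __init__(self):
            self.pending = deque()

        def post(self, event, self_directed=False):
            # A self-directed event -- including the create event sent
            # from one subtype's delete state to the next subtype, per
            # the suggestion above -- jumps ahead of pending events.
            if self_directed:
                self.pending.appendleft(event)
            else:
                self.pending.append(event)

    q = InstanceQueue()
    q.post("X2: Work Arrived")
    q.post("X1: Create Next Subtype", self_directed=True)
    print(list(q.pending))   # the create event is dispatched first

With this policy, no externally generated event can be accepted between the deletion of one subtype and the creation of the next.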
> >and, 2) That an object's state model represents the *potential* >lifecycle >of all instances of that object. That is, we allow an instance to take >on behaviors of each state as it transitions through its lifecycle. We >do >not expect that it will be executing state D behaviors while in state B, >although we know that if it reaches state D, it will execute those >behaviors >then. The model accounts for the possibilities. In fact, any >particular >instance may take an alternate state path (if modeled) and not reach >some >of the states it had the potential to reach. > > >In my opinion, not instantiating the "alternate" subtype possibilities >at >the same time as the intial subtype instance carries the same failure to >recognize the existence of the complete abstraction as does creating a >supertype instance without its subtype. What do you mean by "instantiating the 'alternate' subtype possibilities"? The instance can only exist as one subtype at a time. > >In an example where a supertype Employee has three subtypes, Peon, >Manager, and >Executive, and there is a recognition that subtype membership is not >mutually >exclusive for any one instance, our OOA conventions for multiple subtype >behavior are as follows: > >For every creation of a new real-world instance, the analyst must decide >(through an assigner, terminator, etc...) exactly which combination of >subtypes (one, two or more, or all) that the instance has the >*potential* to be. >At that point, all instance possibilities at all necessary levels are >created. For example, it will be predetermined for every Employee >instance >created whether it will only ever be a Peon, or whether its potential to >rise >to Executive status will be recognized through creation now of "dormant" >Manager and Executive subtype instances. Dormant instances may likely >remain >in an Idle or Dormant state until the EXEC1:Promotion event is generated >to >the instance. I don't believe that the method allows this. Once again, the instance can only exist in one subtype at a time. There are other alternatives for modeling problems in which an entity serves various roles. > >It has always been my belief that subtype migration is sloppy >abstractionism. Mastering its nuances is certainly challenging, both at >an >analysis and architectural level, so kudos to those that have solved >the problems. However, we avoid the problems altogether by recognizing >that the required subtypes are all there at all times. Just like any >Employee hired as a Peon either has the potential to rise to Manager >or Executive status, even if they don't actually achieve it themselves, >or >is hired directly as an Executive, and probably has a contract that says >"I will never do Peon work" thus requiring no Peon subtype instance to >be created at all. If an instance exists in multiple subtypes at the same time, then how is a polymorphic event routed to just one of the subtypes, or do all of the relevant subtypes receive it? Bary Hogan LMTAS Subject: Re: (SMU) Polymorphic events Neil Lang writes to shlaer-mellor-users: -------------------------------------------------------------------- > > baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: > > -------------------------------------------------------------------- > > ...some deletia... > > > Consider migrating subtypes that are modeled with born and die > lifecycles in which the delete state of one subtype generates the > create event to the next subtype. 
> I think it necessary to consider > this create event to be "self-directed", meaning that it will be > received before any other events directed at the same instance. This > prevents other events from being received between the time one subtype > > is deleted and the next is created. This is consistent with the > architecture being responsible for _not_ receiving any other events > during a migration as Neil has stated above. > > I think it would be beneficial if the OOA96 Rule for expediting > self-directed events were expanded (or clarified) to include this > situation. Conceptually, the event really _is_ being directed to the > same instance. > > This situation might also be handled by synchronously creating the > next subtype instance from the delete state of the first, but there > are disadvantages to that. We did some work a few years ago on coordinating the lifecycles of migrating subtypes and quickly realized that it was not as easy as we had expected. Based on that incomplete research, we observed (as you also noticed) that a combination of asynchronous creation/synchronous deletion or asynchronous deletion/synchronous creation seems to work best. Trying to do both asynchronously is laden with problems. Which is why I personally favor raising the level of representation of subtype migration at the analysis level to a single "migrate" operation. It would eliminate a lot of pro-forma process modeling currently required to convey the same idea. In addition, being a single, atomic (if I can use that word) process, it would be easier to indicate that the architecture must implement the complete migration in an uninterrupted fashion. I have not thought this idea out completely so it may be full of holes, but I'd be interested in getting some feedback from you. > > Bary Hogan > LMTAS ---------------------------------------------------------------------- Neil Lang nlang@projtech.com Project Technology, Inc. 510-567-0255 x623 10940 Bigge Street San Leandro, CA 94577 http://www.projtech.com ---------------------------------------------------------------------- Subject: Re: (SMU) Polymorphic events Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Neil Lang wrote: > We did some work a few years ago on coordinating the lifecycles > of migrating subtypes and quickly realized that it was not as easy > as we had expected. Based on that incomplete research, > we observed (as you also noticed) that a combination of > asynchronous creation/synchronous deletion or asynchronous > deletion/synchronous creation seems to work best. Trying to > do both asynchronously is laden with problems. > > Which is why I personally favor raising the level of representation > of subtype migration at the analysis level to a single "migrate" > operation. It would eliminate a lot of pro-forma process modeling > currently required to convey the same idea. In addition, being a > single, atomic (if I can use that word) process, it would be easier > to indicate that the architecture must implement the complete > migration in an uninterrupted fashion. > > I have not thought this idea out completely so it may be full of > holes, but I'd be interested in getting some feedback from you. The concept of the atomic migration works fairly well with a simple subtype cluster (one supertype, many subtypes); though, as had been noted by others, the state model of the destination subtype may have a creation state.
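For the simple cluster, the proposed operation is easy to picture. A minimal sketch, assuming a dictionary-per-instance representation with invented names (the Mixing Tank subtypes are borrowed from the example discussed later in the thread):

    # Sketch: an atomic "migrate" for a one-supertype cluster with
    # mutually exclusive subtypes.
    def migrate(instances, super_id, to_subtype, initial_state):
        """Replace the current subtype counterpart of instance super_id
        in one step; the supertype entry is never touched.  A real
        architecture would also lock out events to this instance for
        the duration."""
        entry = instances[super_id]
        entry["subtype"] = to_subtype    # old counterpart deleted, new one created
        entry["state"] = initial_state   # the analyst names the entry state

    instances = {7: {"subtype": "Unassigned Tank", "state": "Idle"}}
    migrate(instances, 7, "Assigned Tank", "Filling")

The complications raised below are precisely the cases where this single-step picture stops being adequate.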
It is sometimes natural for the termination state of one subtype (entered asynchronously) to generate a creation event to another. Many of the problems with this approach seem to stem from non-optimal assignment of responsibility within the model (i.e. the use of "manager" objects) or objects being too tightly coupled. However, once you get more complex subtype clusters, things get more messy. A single, synchronous, migration process would become increasingly complex to specify and map. There are two basic complications, which can then be combined, repeated and extended to increase complexity further: 1. multilevel hierarchy 2. shared subtype 1. Consider a system with 3 levels of hierarchy. The subtypes in the middle layer are also supertypes. If objects in the middle layer are migrated then subtypes below must also be migrated. The migration operator would need to specify the leaf subtype under the destination node-subtype. i.e. Supertype: TOP; Sub/Supertype A, B; leaf subtypes AA, AB; BA, BB. The migration of A->AA to B->BA must specify both the B object and the BA object. 2. Consider a system with 2 supertypes; each has 2 subtypes. However, they share one of them. i.e. Supertypes A, B; Subtypes AA, AB, BB. There are only two possible states of this cluster: A->AA, B->BB; or A,B->AB. So a migration out of subtype AB must create both AA and BB; and a migration from AA to AB must also migrate BB to AB. These cases, which can easily be extended, demonstrate the problems with any attempt to move away from atomic manipulation of subtype relationships. I'm not sure that a generally applicable migration operator can exist. If you choose to provide a limited migration operator then the model becomes fragile. A system may work well with the simple operator, but then be invalidated by the addition of a subtype relationship to an existing subtype object. That would be very undesirable. (And then, there is the complication caused by the link/unlink semantics of SMALL: the identity of the subtype objects no longer defines the identity of their parent - they must be explicitly linked. Call me a Luddite, but I find this linking to be unnecessary.) Dave. Not speaking for Mitel Semiconductor. -- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Polymorphic events lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > 1. Consider a system with 3 levels of hierarchy. The subtypes in > the middle layer are also supertypes. If objects in the middle > layer are migrated then subtypes below must also be migrated. > The migration operator would need to specify the leaf subtype > under the destination node-subtype. > > i.e. Supertype: TOP; Sub/Supertype A, B; leaf subtypes AA, AB; BA, BB. > > The migration of A->AA to B->BA must specify both the B object > and the BA object. > > 2. Consider a system with 2 supertypes; each has 2 subtypes. > However, they share one of them. i.e. Supertypes A, B; Subtypes AA, AB, BB. > > There are only two possible states of this cluster: A->AA, B->BB; > or A,B->AB. > > So a migration out of subtype AB must create both AA and BB; and a > migration from AA to AB must also migrate BB to AB. First, I am bothered by the second example. The implication is that only AB inherits data from both supertypes.
If this is the case, then subtype migration would be invalid -- I believe migratable subtypes must be siblings to common parents. This becomes clear when one is using compound identifiers where some of the subtype identifiers are from the supertype keys -- if the subtypes have different keys, they are different objects and can't migrate. Second, I don't see the general problem. When a subtype is migrated, its parent supertypes are unmodified. This is why it is *sub*type migration. The only thing that might change if one literally did a delete/create would be for the sub/supertype relationship to change -- but this would already have been handled explicitly by writing to the referential attributes or by unlink/link in the action. As it happens, I see the delete/create as a kludge for migration precisely because one would technically have to modify the sub/supertype relationship explicitly in the action (i.e., you have to remove the relationship prior to deleting). The reality is that it would generally be highly inefficient to literally do a delete followed by a create. Ideally you don't want to reallocate the data store and you don't want to touch the sub/super relationship in the implementation; all you want to do is change some attribute values. This is why I would prefer to see a special Migrate process to handle subtype migration more cleanly and more abstractly. Note that a Migrate process makes things much easier for the translator if the architecture does not instantiate supertypes for performance reasons. Now there is only one composite instantiation of the data store, as a subtype, and deletes can get tricky because of supertype relationships. However, Migrate removes the burden of exception processing from an already complex task when subtype migration is possible (e.g., the translator does not have to do complex action analysis for each delete to figure out what type of processing is required). > (And then, there is the complication caused by the link/unlink > semantics of SMALL: the identity of the subtype objects no longer > defines the identity of their parent - they must be explicitly linked. > Call me a Luddite, but I find this linking to be unnecessary.) Despite my long-standing policy of never agreeing with Luddites, I have to point out that it wouldn't be necessary if there were a Migrate process. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Polymorphic events Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Whipp... > > 2. Consider a system with 2 supertypes; each has 2 subtypes. > > However, they share one of them. i.e. > > > > Supertypes A, B > > Subtypes AA, AB, BB. > > > > There are only two possible states of this cluster: A->AA, B->BB; > > or A,B->AB. > > > > So a migration out of subtype AB must create both AA and BB; and a > > migration from AA to AB must also migrate BB to AB. > > First, I am bothered by the second example. The implication is that > only AB inherits data from both supertypes.
> This becomes clear when one is using compound identifiers where some of the subtype identifiers are from the supertype keys -- if the subtypes have different keys, they are different objects and can't migrate. I agree with the last sentence, but let me give a specific example where it doesn't apply. I have a model of a parallel port. The port has a number of pins, each of which is associated with a channel (generally, each channel controls 8 or 16 pins - but that isn't important). Each channel is associated with an operating mode, which defines the protocol for moving data into, or out of, the parallel port. The parallel port is bidirectional. There exists a "data direction" register which controls the direction of each pin. The protocols for each direction are independent. There are a number of input protocols and a number of output protocols. For example, protocols such as continuous-input, strobed-input, continuous-output, strobed-output are available. (This gives 4 possible combinations; each combination is known as an operating mode.) This is fairly straightforward: 2 independent subtype trees. The two parents, input-protocol and output-protocol, have identical identifiers, and are connected by a 1:1 relationship. There is actually a triangle of relationships: the Channel object has a 1:1 relationship with both of its protocols. All objects in the triangle have the same identifier. The subtypes of the input and output protocol objects define the actual protocols; and migration of one or both subtype relationships occurs when the mode is changed. However, there is a special mode in which the data-direction register is overridden by a single, bidirectional, protocol. In this case both the input and output protocols are migrated onto a shared "strobed-bidirectional" protocol subtype. Migration is obviously necessary, and possible. In the hardware there is a register with a 3-bit field that selects between the 5 possible modes for a channel. When the value is changed, the subtypes must migrate. Moving into and out of the shared subtype is easy when migration is done with delete and create operators. (In my CASE tool, I don't even need to link/unlink any relationships - the referential attribute in the subtype handles it.) Using a simple migration operator for this is more difficult. It would have to be aware of the need to delete 2 subtypes and create 1; or to delete one and create 2. Obviously, the architecture needs to handle this; but that's no problem because ours does :-). > > 1. Consider a system with 3 levels of hierarchy. The subtypes in > > the middle layer are also supertypes. If objects in the middle > > layer are migrated then subtypes below must also be migrated. > > The migration operator would need to specify the leaf subtype > > under the destination node-subtype. > > > > i.e. > > Supertype: TOP > > Sub/Supertype A, B > > leaf subtypes AA, AB; BA, BB > > > > The migration of A->AA to B->BA must specify both the B object > > and the BA object. > > Second, I don't see the general problem. When a subtype is migrated, > its parent supertypes are unmodified. This is why it is *sub*type > migration. The only thing that might change if one literally did a > delete/create would be for the sub/supertype relationship to change > -- but this would already have been handled explicitly by writing to > the referential attributes or by unlink/link in the action. Correct, the parents aren't modified, but that wasn't the point.
The subtypes themselves, including any of their subtypes, are modified (deleted/created). All levels in the hierarchy may have attributes, and may form relationships with other objects. Migration of a middle-layer object may require many other changes to keep the model consistent. A specific migration operator would need to be very powerful to fully specify all the possible variants. Leon Starr's book on SM has some interesting examples of complex subtype networks. For simple cases, a simple migration operator makes life simpler. For complex cases, it makes things more difficult. This is a problem. Dave. -- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Migrating Subtypes - Rather not lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Frankel... I misfiled this mail, so I am slow getting back to it. Sorry about that. As it happens, we don't use subtype migration very often. However, I believe that when it is appropriate -- almost always in role-playing situations -- it is extremely useful in describing reality. Moreover, I think the modeling alternative you offer is not consistent with the real world in role-playing situations. > However, the > practice is not consistent with some basic precepts that SM OOA'ers > hold dear: > > 1) That an instance in the real world, regardless of how many levels > of supertype/subtype hierarchy it has been abstracted into, is all > levels at once. That is, just because aspects of its supertype half > aren't being used or addressed dynamically at some point in time, > doesn't > mean that we don't recognize its existence in the complete abstraction. > The rule is all instance "levels" are created and deleted at the same > time, > because as an entirety, they fully abstract the real world counterpart. I do not see an inconsistency. Creating and deleting all supertypes is just a mechanism for maintaining consistency. The real requirement is that the application be consistent at the end of the action. If you are just deleting a subtype, then you would, indeed, have to delete all the supertypes before exiting the action to maintain consistency. However, for subtype migration all you have to ensure is that the complete hierarchy is intact at the end of the action -- which it will be so long as the subtype create and delete are both done in that action. In fact, during migration the supertype hierarchy is completely untouched. > 2) That an object's state model represents the *potential* > lifecycle > of all instances of that object. That is, we allow an instance to take > on behaviors of each state as it transitions through its lifecycle. We > do > not expect that it will be executing state D behaviors while in state B, > although we know that if it reaches state D, it will execute those > behaviors > then. The model accounts for the possibilities. In fact, any > particular > instance may take an alternate state path (if modeled) and not reach > some > of the states it had the potential to reach. There is only one state model for each instance and there is no confusion about which state model should execute for a given subtype. The analyst is responsible for ensuring that when a subtype is migrated it is placed in a state in the new state machine that is appropriate for subsequent processing.
It seems to me that the ability to take advantage of this is the core reason for wanting to do subtype migration. One of the main reasons for migrating subtypes is to describe fundamentally different roles in different circumstances for the same entity. For example, we have a Tester Pin with a certain suite of attributes and it represents a very concrete real world entity. Each Tester Pin is unique, with a pin number that identifies its physical location, and there is always exactly one at each location. However, it plays very different roles depending upon the testing context. It can provide a DC bias, it can detect a signal, or it can short to ground. The same physical entity plays multiple roles with quite different behaviors and interactions with the rest of the system depending upon how it has been connected. The act of connecting the Tester Pin determines which subtype role is appropriate. The trick is that within the context of a single Test the same Tester Pin may be connected in different ways at different times. The Tester Pin is the same one all the time; the only thing changing is its relationship to the rest of the world. In this sort of situation subtype migration is a boon; any other modeling solution would be a kludge at best and downright misleading at worst. > In my opinion, not instantiating the "alternate" subtype possibilities > at > the same time as the initial subtype instance carries the same failure to > recognize the existence of the complete abstraction as does creating a > supertype instance without its subtype. If you want to instantiate all possibilities, how would you capture the fact in the above example that the alternate possibilities cannot coexist at the same moment in time? Instantiating the alternatives strikes me as highly misleading about what is actually happening -- it implies coexistence of different real world entities. > In an example where a supertype Employee has three subtypes, Peon, > Manager, and > Executive, and there is a recognition that subtype membership is not > mutually > exclusive for any one instance, our OOA conventions for multiple subtype > behavior are as follows: > > For every creation of a new real-world instance, the analyst must decide > (through an assigner, terminator, etc...) exactly which combination of > subtypes (one, two or more, or all) the instance has the > *potential* to be. > At that point, all instance possibilities at all necessary levels are > created. For example, it will be predetermined for every Employee > instance > created whether it will only ever be a Peon, or whether its potential to > rise > to Executive status will be recognized through creation now of "dormant" > Manager and Executive subtype instances. Dormant instances may likely > remain > in an Idle or Dormant state until the EXEC1:Promotion event is generated > to > the instance. This seems to assume a single state machine in the supertype that would apply to all subtypes. That would be one Seriously Ugly state machine in most cases where subtype migration is relevant. However, my objection is not the clumsiness of the state machine. The dormant subtypes clearly misrepresent what is actually happening. They imply that there are concrete realities (e.g., multiple Tester Pins at a given location) present that are simply not there at a given moment in time -- there is only one instance in existence at a time in the real world.
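The role-playing reading is easy to render concretely. A minimal sketch, with invented class names standing in for the subtype roles (this illustrates the idea only, not any actual Teradyne model):

    # Sketch: the supertype persists; only the role subtype is exchanged.
    class TesterPin:                      # the concrete entity
        def __init__(self, pin_number):
            self.pin_number = pin_number  # identifier, never changes
            self.role = None              # current subtype counterpart

    class DCBias: pass                    # mutually exclusive roles
    class Detector: pass
    class GroundShort: pass

    def connect(pin, role_class):
        # Subtype migration: the old role counterpart is dropped and a
        # new one created within one action; the pin itself is untouched.
        pin.role = role_class()

    pin = TesterPin(pin_number=42)
    connect(pin, DCBias)
    connect(pin, Detector)    # same pin, new role

At no point do two role counterparts for pin 42 coexist, which is the mutual-exclusion property at issue in the next paragraph.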
I believe the best way to think of subtype migration is to view the supertype as the concrete entity and the subtypes as mutually exclusive roles that it can play through time. There is only one real world instance in a role-playing situation so I think that instantiating multiple instances is, at best, highly misleading from a modeling view. > One of the problems with subtype migration (defined as deleting one > subtype in > exchange for another) is that it does not account for the case of > overlapping > subtype behaviors. That can be easily handled through a state machine in the supertype that describes the common behavior for the subtypes, which is exactly what the notation is designed to do. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: Re: (SMU) Migrating Subtypes - Rather not Mike Frankel writes to shlaer-mellor-users: -------------------------------------------------------------------- Bary Hogan wrote: > > baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > >Mike Frankel writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > > >It has been my experience that most of the problems introduced into > >analysis and architectures as a result of modeling objects with > >migrating subtypes can be eliminated with a simple rule: > > > >Migrating subtypes are BAD(to quote Neil) analysis. > > I simply disagree. I have found migrating subtypes to be _extremely_ > useful. When a real world entity has different lifecycles in > different "phases" of its existence, then migrating subtypes are a > good way to model each lifecycle in a concise and understandable way. Some clarification is warranted here. I also agree that the notion of "migrating" from one phase of existence to another is extremely useful to model. My issue is with the formalism chosen to model this phenomenon. Most OOA'ers' chosen technique is to delete one subtype instance and replace it with an instance of another subtype, performing the shenanigans necessary to preserve supertype id values and to not get "caught" with your relationship integrity pants down when events arrive. Our chosen technique is to create the supertype instance and *all* subtype instances that represent the *potential* phases of that particular instance. As an aside, we choose to model subtype "phases" only when there is a certain degree of autonomy in the dynamics between the phases. We don't use subtype phases to model the normal circular lifecycle of an instance, where sequence among states across subtypes is clearly present. While the Object Lifecycles book is a wonderful treatise on OOA in so many ways, the particular example given on p.59 (Mixing Tank with Assigned and Unassigned subtypes) is a dreadful example of when the subtype phasing concept should be applied. It is unfortunate that this example has played a large part in defining when subtype phasing (migration) should be modeled. > The alternative is to combine all the lifecycles into one huge and > unmanageable state model in the supertype. That is not the alternative I am suggesting (see above). > > > > >Now, I know this is a common practice among OOA'ers.
> >However, the practice is not consistent with some basic precepts that SM OOA'ers > >hold dear: > > > >1) That an instance in the real world, regardless of how many levels > >of supertype/subtype hierarchy it has been abstracted into, is all > >levels at once. That is, just because aspects of its supertype half > >aren't being used or addressed dynamically at some point in time, > >doesn't > >mean that we don't recognize its existence in the complete abstraction. > >The rule is all instance "levels" are created and deleted at the same > >time, > >because as an entirety, they fully abstract the real world counterpart. > > Yes, when an instance is created, you must create an instance at each > level, and the same goes for deletions. However, subtype migration is > a different issue. In my mind, the actual instance is not deleted, it > just stops being one subtype and begins being the other. Of course, > you must delete one subtype and create the other, but this is > different than creating or deleting the entire instance. I do > perceive a slight weakness in the method here, since it is a bad thing > for the subtype to be out of existence for any period of time. > Therefore, I think that the migration should be instantaneous, but I'm > not sure how to represent that with events. I have found that > treating the event that migrates the instance as self-directed (per > OOA96) solves any practical problems. It is a *very* bad thing for the subtype to be out of existence for any amount of time. That is precisely why we prefer to create all subtype options for that instance at the start. Then there is never a problem with "what if it's not created yet". There is also no need for any special MIGRATE operations in the action language. We need only rely on the existing dynamic formalism of events and states to preserve model correctness. > > > > > >and, 2) That an object's state model represents the *potential* > >lifecycle > >of all instances of that object. That is, we allow an instance to take > >on behaviors of each state as it transitions through its lifecycle. We > >do > >not expect that it will be executing state D behaviors while in state B, > >although we know that if it reaches state D, it will execute those > >behaviors > >then. The model accounts for the possibilities. In fact, any > >particular > >instance may take an alternate state path (if modeled) and not reach > >some > >of the states it had the potential to reach. > > > > > >In my opinion, not instantiating the "alternate" subtype possibilities > >at > >the same time as the initial subtype instance carries the same failure to > >recognize the existence of the complete abstraction as does creating a > >supertype instance without its subtype. > > What do you mean by "instantiating the 'alternate' subtype > possibilities"? I mean creating an Employee instance, a Peon instance, a Manager instance, and an Executive instance all at once, upon initial creation of the instance in the model. And, they are all deleted at the same time upon termination of the instance in the model. > The instance can only exist as one subtype at a time. That seems to be the prevailing belief on this user group. So I took a look at the text of Object Lifecycles to see what exactly it said. On page 46, under the section entitled "What an action must do/ Leave subtypes and supertypes consistent" you will find the following: "The action must leave subtypes and supertypes consistently populated. Therefore, if an action creates an instance of the supertype object, it must also create an instance of exactly one of the subtype objects. Similarly, if an action deletes an instance of a subtype, it must also delete the corresponding instance of the supertype" Now clearly, our technique of creating more than one subtype instance at a time is in violation of the phrase: "...exactly one of the subtype objects". Unfortunately, the assumption everyone else makes -- that subtype migration requires deleting one subtype and creating a new one while *not* deleting the "...corresponding instance of the supertype" -- is also clearly in violation of the above paragraph. And there is no superseding formal publication which changes or modifies these rules. So, all bets are off. We are all free to explore the best techniques available, because according to the Object Lifecycles book itself, subtype migration (as commonly modeled) should be illegal.
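Stated as a check, the p.46 rule is unambiguous. A small sketch, using an invented population layout (one set of instance ids per subtype):

    # Sketch: end-of-action check of the Object Lifecycles rule that
    # every supertype instance has exactly one subtype counterpart.
    def check_populations(supertype_ids, subtype_populations):
        """supertype_ids: set of instance ids of the supertype.
        subtype_populations: dict mapping subtype name -> set of ids."""
        errors = []
        for sid in supertype_ids:
            owners = [name for name, ids in subtype_populations.items() if sid in ids]
            if len(owners) != 1:
                errors.append((sid, owners))    # zero or several subtypes
        for name, ids in subtype_populations.items():
            for sid in ids - supertype_ids:
                errors.append((sid, [name]))    # orphaned subtype instance
        return errors

    print(check_populations({1, 2}, {"Assigned": {1}, "Unassigned": {2}}))  # []

Both practices under discussion trip this check at some point: the multiple-counterpart convention fails the "exactly one" test throughout, and delete-then-create migration fails it in the window between the two steps.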
> >However, we avoid the problems altogether by recognizing > >that the required subtypes are all there at all times. Just like any > >Employee hired as a Peon either has the potential to rise to Manager > >or Executive status, even if they don't actually achieve it themselves, > >or > >is hired directly as an Executive, and probably has a contract that says > >"I will never do Peon work" thus requiring no Peon subtype instance to > >be created at all. > > If an instance exists in multiple subtypes at the same time, then how > is a polymorphic event routed to just one of the subtypes, or do all > of the relevant subtypes receive it? It depends on how the analyst modeled it. If both subtype counterparts are in a state that can receive the polymorphic event, they both will get it simultaneously. If the analyst wants to ensure that only one subtype can process it, he will need to ensure that the other subtype(s) are in states that will not process the event. -- ----------------------------------------------- Mike Frankel Director of Software Engineering Esprit Systems Consulting, Inc. 610-436-8290 fax 610-436-9848 mfrankel@EspritInc.com http://www.EspritInc.com -->Domain Engineering For Reuse -->Vital Link Team Consulting -->BASELINE Domain Models -->Object Methods Training and Consulting -->Structured Methods Training and Consulting "Strategies for Computer and Human Systems" ----------------------------------------------- Subject: Re: (SMU) Migrating Subtypes - Rather not Mike Frankel writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > lahman writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > Responding to Frankel... > > I misfiled this mail, so I am slow getting back to it. Sorry about that. I think you are the only one holding yourself to the "respond within two seconds" standard. No problem. > As it happens, we don't use subtype migration very often. Good for you. > However, I believe that when it is appropriate -- almost always in role-playing situations -- it is extremely useful in describing reality. So do I. See my response to Bary Hogan. > Moreover, I think the modeling alternative you offer is not consistent with the real world in role-playing situations. Very little we do in object models is "consistent" with the real world. I don't remember tripping over any supertype/subtype hierarchies in my living room. It's all abstractions and techniques for capturing them. We make up the rules, not the real world.
> > The rule is all instance "levels" are created and deleted at the
> > same time, because as an entirety, they fully abstract the real
> > world counterpart.
>
> I do not see an inconsistency. Creating and deleting all supertypes is just a
> mechanism for maintaining consistency. The real requirement is that the application
> be consistent at the end of the action. If you are just deleting a subtype, then you
> would, indeed, have to delete all the supertypes before exiting the action to
> maintain consistency. However, for subtype migration all you have to ensure is that
> the complete hierarchy is intact at the end of the action -- which it will be so long
> as the subtype create and delete are both done in that action. In fact, during
> migration the supertype hierarchy is completely untouched.

Now you are making up rules. Sure, that all sounds like a wonderful set
of rules. Where did they come from?

> There is only one state model for each instance and there is no confusion about which
> state model should execute for a given subtype.

That is the SM rule, but at the same time, it is not always consistent
with the real world role playing you mentioned above. Instances often
play multiple autonomous (or mostly autonomous) roles at the same time.
Where is that consistency in your scheme?

> The analyst is responsible for ensuring that when a subtype is migrated
> it is placed in a state in the new state machine that is appropriate
> for subsequent processing. It seems to me that the ability to take
> advantage of this is the core reason for wanting to do subtype
> migration.

More wonderful rules.

> One of the main reasons for migrating subtypes is to describe fundamentally different
> roles in different circumstances for the same entity. For example, we have a Tester
> Pin with a certain suite of attributes and it represents a very concrete real world
> entity. Each Tester Pin is unique, with a pin number that identifies its physical
> location, and there is always exactly one at each location. However, it plays very
> different roles depending upon the testing context. It can provide a DC bias, it can
> detect a signal, or it can short to ground. The same physical entity plays multiple
> roles with quite different behaviors and interactions with the rest of the system
> depending upon how it has been connected. The act of connecting the Tester Pin
> determines which subtype role is appropriate.
>
> The trick is that within the context of a single Test the same Tester Pin may be
> connected in different ways at different times. The Tester Pin is the same one all
> the time; the only thing changing is its relationship to the rest of the world. In
> this sort of situation subtype migration is a boon; any other modeling solution would
> be a kludge at best and downright misleading at worst.

Assuming your migrating subtypes are under an associative Connection
Test object, as opposed to the Tester Pin itself, its role migration
seems like a good example. However, it makes a better case for our
technique, because your Tester Pin is bouncing back and forth between
test subtypes like a ping-pong ball. That is a lot of subtype creation
and deletion integrity to deal with. We would just create all
Connection Test subtype instances necessary for this test run on a
particular Tester Pin, and use their dynamics as necessary, *including
simultaneously*, when required.
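For contrast, lahman's reading quoted above -- delete one subtype and
create its replacement within a single action, leaving the supertype
untouched -- can also be sketched. Again this is only an illustration
with invented names, not the method's notation; a real architecture
would manage identifiers and relationship instances, not Python object
references.

    class TesterPin:                   # supertype: the concrete entity
        def __init__(self, pin_number):
            self.pin_number = pin_number   # identifier, shared downward
            self.role = None               # current subtype instance

    class Role:                        # base for the subtype "roles"
        def __init__(self, pin, state):
            self.pin_number = pin.pin_number  # same identifier as supertype
            self.state = state                # initial state in the new FSM

    class DCBias(Role): pass
    class Detector(Role): pass

    def migrate(pin, new_role, initial_state):
        # One action: the hierarchy may be inconsistent *during* the
        # action, but is consistent again before the action completes.
        pin.role = None                           # delete the old subtype
        pin.role = new_role(pin, initial_state)   # create the new one

    pin = TesterPin(42)
    migrate(pin, DCBias, "Applying Bias")
    migrate(pin, Detector, "Waiting For Signal")  # role changes, pin persists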
> > In my opinion, not instantiating the "alternate" subtype possibilities at
> > the same time as the initial subtype instance carries the same failure to
> > recognize the existence of the complete abstraction as does creating a
> > supertype instance without its subtype.
>
> If you want to instantiate all possibilities, how would you capture the fact in the
> above example that the alternate possibilities cannot coexist at the same moment in
> time? Instantiating the alternatives strikes me as highly misleading about what is
> actually happening -- it implies coexistence of different real world entities.

We say they can coexist, just that you choose not to use them at the
same time. There are other examples, such as the Employee, where roles
not only coexist, but execute concurrently and simultaneously in the
real world. Your scheme does not account for that possibility.

> > For every creation of a new real-world instance, the analyst must decide
> > (through an assigner, terminator, etc...) exactly which combination of
> > subtypes (one, two or more, or all) that the instance has the
> > *potential* to be.
> > At that point, all instance possibilities at all necessary levels are
> > created. For example, it will be predetermined for every Employee instance
> > created whether it will only ever be a Peon, or whether its potential to rise
> > to Executive status will be recognized through creation now of "dormant"
> > Manager and Executive subtype instances. Dormant instances may likely remain
> > in an Idle or Dormant state until the EXEC1:Promotion event is generated to
> > the instance.
>
> This seems to assume a single state machine in the supertype that would apply to all
> subtypes. That would be one Seriously Ugly state machine in most cases where subtype
> migration is relevant.

Nowhere in this discussion has the possibility of a supertype state
machine been suggested. This is solely about subtype state machines.
Don't get me started on state machines at super and subtype levels.
That is a whole other divergence of opinion - though I have found more
agreement on that point.

> However, my objection is not the clumsiness of the state machine. The dormant
> subtypes clearly misrepresent what is actually happening. They imply that there are
> concrete realities (e.g., multiple Tester Pins at a given location) present that are
> simply not there at a given moment in time -- there is only one instance in existence
> at a time in the real world. I believe the best way to think of subtype migration is
> to view the supertype as the concrete entity and the subtypes as mutually exclusive
> roles that it can play through time. There is only one real world instance in a
> role-playing situation so I think that instantiating multiple instances is, at best,
> highly misleading from a modeling view.

I think that is a limiting perspective. My experience is that it is
less misleading, to the point of being explicitly direct, because:

a) It allows for multiple roles to be played simultaneously, which is
much more consistent with what happens in the real world than assuming
roles are always mutually exclusive, while at the same time providing
for exclusivity when necessary through dormant states in "non-active"
subtypes, and

b) It addresses the issue of a requirement in the model that is
missing: that certain real world instances have a limited potential.
It is not the case that every instance can always migrate to every
role.
By establishing the potential at creation time, we have another tool
for bounding dynamic execution and moving more testing to the analysis
phase.

> > One of the problems with subtype migration (defined as deleting one
> > subtype in exchange for another) is that it does not account for the
> > case of overlapping subtype behaviors.
>
> That can be easily handled through a state machine in the supertype that describes
> the common behavior for the subtypes, which is exactly what the notation is designed
> to do.

"Overlapping" is not referring to that case; it is referring to
concurrency.

--
-----------------------------------------------
Mike Frankel
Director of Software Engineering
Esprit Systems Consulting, Inc.
610-436-8290  fax 610-436-9848
mfrankel@EspritInc.com
http://www.EspritInc.com

-->Domain Engineering For Reuse
-->Vital Link Team Consulting
-->BASELINE Domain Models
-->Object Methods Training and Consulting
-->Structured Methods Training and Consulting

"Strategies for Computer and Human Systems"
-----------------------------------------------

Subject: Re: (SMU) Polymorphic events

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> The parallel port is bidirectional. There exists a "data direction"
> register which controls the direction of each pin. The protocols
> for each direction are independent. There are a number of input
> protocols and a number of output protocols. For example,
> protocols such as continuous-input, strobed-input, continuous-output,
> strobed-output are available. (This gives 4 possible combinations;
> each combination is known as an operating mode.)
>
> This is fairly straightforward: 2 independent subtype trees.
> The two parents, input-protocol and output-protocol, have identical
> identifiers, and are connected by a 1:1 relationship. There is
> actually a triangle of relationships: the Channel object has a 1:1
> relationship with both of its protocols. All objects in the triangle
> have the same identifier. The subtypes of the input and output
> protocol objects define the actual protocols; and migration of one
> or both subtype relationships occurs when the mode is changed.

I am bothered by having two objects, input-protocol and
output-protocol, that have identical identifiers. If this is a
coincidence of value, then fine; but if it is a semantic equality I
think that would be a no-no. In particular, if they each have a single
identifier and that single identifier was also the ref attr for the 1:1
between them, then I think there is a problem.

> However, there is a special mode in which the data-direction
> register is overridden by a single, bidirectional, protocol.
> In this case both the input and output protocols are migrated
> onto a shared "strobed-bidirectional" protocol subtype.

Wouldn't the need for this go away if you had a single supertype,
Protocol, that had five subtypes that were migrated (the four
strobe/continuous vs. input/output combinations and the
strobed-bidirectional)? Then Channel would be 1:1 with Protocol and
the migration would be traditionally simple.

[I am also confused about where the parallel port fits in that has the
channels. I suspect I would have been tempted to make Parallel Port an
object, rather than Protocol, and would have given it five subtypes and
a 1:M to Channel. That way Channel and Parallel Port are clearly
concrete in the problem space and the Parallel Port subtypes reflect
the roles of the port over time.
But I haven't seen the rest of the application and I tend to like role
players for migration.]

> Migration is obviously necessary, and possible. In the hardware
> there is a register with a 3-bit field that selects between the
> 5 possible modes for a channel. When the value is changed, the
> subtypes must migrate. Moving into and out of the shared
> subtype is easy when migration is done with delete and
> create operators. (In my CASE tool, I don't even need to
> link/unlink any relationships - the ref. attr in the subtype
> handles it.)

This says to me that the identifiers for input-protocol,
output-protocol, and strobed-bidirectional are semantically equivalent.
If that is the case they can have only one supertype parent.

> Using a simple migration operator for this is more difficult.
> It would have to be aware of the need to delete 2 subtypes and
> create 1; or to delete one and create 2. Obviously, the
> architecture needs to handle this; but that's no problem because
> ours does :-).

This is the part that makes me queasy, though I have no concrete
refutation other than the semantic equivalence of the objects. How did
the two subtypes become active at the same time? I would think that
the port can only be doing one thing at a time (albeit maybe on half a
clock cycle).

> Correct, the parents aren't modified, but that wasn't the point.
> The subtypes themselves, including any of their subtypes, are
> modified (deleted/created). All levels in the hierarchy may have
> attributes, and may form relationships with other objects. Migration
> of a middle-layer object may require many other changes to keep
> the model consistent. A specific migration operator would need to
> be very powerful to fully specify all the possible variants. Leon
> Starr's book on SM has some interesting examples of complex subtype
> networks.

Well, I have never seen a case where one needed to migrate a subtype
that was also a supertype. I belong to the school that holds that only
leaf subtypes should be migrated. The problem with migrating
intermediate objects is that this limits the implementation to
architectures that actually instantiate supertypes. As an Old Cycle
Counter, I think such architectures should be banned and barred,
forbidden fare. (Sorry, I occasionally get these inexplicable urges to
quote Byron. I had an unfortunate childhood encounter with an iambic
pentameter.)

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) Polymorphic events

David Stone writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am pleased to receive such a rapid response to my previous posting
about this, but from the replies it is obvious that I didn't explain
myself clearly. I was trying not to prejudice the issues, but in doing
so was too vague. Here's an attempt to clarify them.

(1) Although it is clear in the method that there must in general
always be precisely one subtype instance for each supertype instance,
the method does not have any "multiple creation accessor" which would
create all the instances at once. Thus there is bound to be a time
during which a supertype instance exists without a subtype, or
vice-versa. My question is: what happens to polymorphic events at
those times (and analogous times during deletion and migration)?
Carolyn Duby, and I think Neil Lang, were saying that it is a run-time
error; this seems quite harsh to me. It can be difficult for the
analyst to keep track of these situations, especially when following
the concurrent interpretation of time.

(2) Concerning the priority of polymorphic events to self, we must
have some rule. We could say (as Neil seemed to be suggesting) that
such events are not permitted, though I think there are cases when you
wish to generate an event to an instance and you don't know (at
analysis time) whether it is a self-directed event. What if an
instance finds (by a selective read accessor) an instance of its
supertype object, and then wishes to send that instance a polymorphic
event? In such a case it complicates the analysis to have to check
whether it is in fact, by polymorphism, a self-directed event, and send
it monomorphically if so.

((3) was just a request for the wording of OOA'96 to be clarified.)

(4) Concerning loops in supertype/subtype "hierarchies", it is quite
true, as Neil says, that in most cases these cannot occur because of
the constraints on the numbers of instances. However, the degenerate
case, in which a supertype object has only one subtype, is permitted
(as far as I know) in the method, and so an architecture must do
something with such a case. We all seem to agree that such loops
should be banned: all that is needed is that the official documentation
of the method say so in future.

(5), (6), (7): These were all about the possible positions of active
objects in the supertype/subtype hierarchy, and what a complete
polymorphic event table was. I gathered that both Carolyn and Neil
were implying that each polymorphic event should be mapped at run-time
to exactly one event. To state what I think they said more formally:
for each polymorphic event label, consider all the paths in the tree,
from the object to which the polymorphic event is directed, to the leaf
nodes. For every such path there must be precisely one entry in the
polymorphic event table, which must map the polymorphic event label to
a plain event label, of an event directed to one of the objects on that
path. I think this is the definition of "complete". This is not what
either Neil or Carolyn said exactly, but I think it's the generalized
expression of what they meant.

I do notice that Neil uses "instance" to mean what I mean when I say
"all related instances in a super/subtype hierarchy from the root to
the leaf". This usage seems different from the established usage in
e.g. Modeling the World in States p.29.

--
David Stone
Sent by courtesy of, but not an official communication from:
Simoco Europe, P.O.Box 24, St Andrews Rd, CAMBRIDGE, CB4 1DP, UK

Subject: Re: (SMU) Polymorphic events

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> > However, there is a special mode in which the data-direction
> > register is overridden by a single, bidirectional, protocol.
> > In this case both the input and output protocols are migrated
> > onto a shared "strobed-bidirectional" protocol subtype.
>
> Wouldn't the need for this go away if you had a single supertype,
> Protocol, that had five subtypes that were migrated (the four
> strobe/continuous vs. input/output combinations and the
> strobed-bidirectional)? Then Channel would be 1:1 with Protocol
> and the migration would be traditionally simple.

When you combine state machines, you multiply the number of states.
Thus, if you assume that each of my "traditional" subtypes has 3
states, then your suggestion would cause an increase of states in each
of the 4 subtypes from 3 to 9. There would also be a lot of repetition
of behaviour, because combined state machines require the action to be
repeated in each place where the state is used.

> [I am also confused about where the parallel port fits in that has
> the channels. I suspect I would have been tempted to make Parallel
> Port an object, rather than Protocol, and would have given it five
> subtypes and a 1:M to Channel. That way Channel and Parallel Port
> are clearly concrete in the problem space and the Parallel Port
> subtypes reflect the roles of the port over time. But I haven't
> seen the rest of the application and I tend to like role players
> for migration.]

The parallel port is the application :-). The domain name is "parallel
port" and it contains an object called "channel" that has many
instances. Each channel is associated with a number of data pins
(configured at run time); the transfer of data over those pins is
determined by the protocol(s) that is (are) currently selected for the
channel.

> > Migration is obviously necessary, and possible. In the hardware
> > there is a register with a 3-bit field that selects between the
> > 5 possible modes for a channel. When the value is changed, the
> > subtypes must migrate. Moving into and out of the shared
> > subtype is easy when migration is done with delete and
> > create operators.
>
> This says to me that the identifiers for input-protocol,
> output-protocol, and strobed-bidirectional are semantically
> equivalent. If that is the case they can have only one supertype
> parent.

Why? I would say that a subtype can only have two supertypes if the
identifiers of the two supertypes are semantically equivalent. Indeed,
the common subtype explicitly requires that they are semantically
equivalent.

You may wish to examine Figure 2.6.3 on page 30 of "Object Lifecycles".
Try migrating any of the subtypes and note the effect on the other
relationships. Assume that the bank is extremely customer friendly and
allows these migrations.

> > Using a simple migration operator for this is more difficult.
> > It would have to be aware of the need to delete 2 subtypes and
> > create 1; or to delete one and create 2. Obviously, the
> > architecture needs to handle this; but that's no problem because
> > ours does :-).
>
> This is the part that makes me queasy, though I have no concrete
> refutation other than the semantic equivalence of the objects.
> How did the two subtypes become active at the same time? I would
> think that the port can only be doing one thing at a time
> (albeit maybe on half a clock cycle).

The two subtypes were active because there are two supertypes; and they
each used an unshared subtype. In this mode of operation, data can be
strobed into and out of the parallel port independently: possibly
simultaneously (remember, there's a data-direction register which
assigns a direction to each pin independently).

It would be possible to model the situation by adding a 3rd subtype to
each of the two supertypes. Unfortunately, the resulting two objects
would be tightly dependent on each other, leading to increased
complexity and maintenance problems.

> Well, I have never seen a case where one needed to migrate a subtype
> that was also a supertype. I belong to the school that holds that
> only leaf subtypes should be migrated.
> The problem with migrating intermediate objects is that this limits
> the implementation to architectures that actually instantiate
> supertypes.

Even when you only migrate the leaves, inner nodes may still be
affected. If you have a subtype hierarchy that's three objects deep
and want to migrate from A->AA->AAA to A->AB->ABA then although you are
migrating leaf objects (AAA to ABA), there is also an inner migration
(AA to AB).

If you don't need to do this, then your architecture doesn't need to
support it. However, an operator that is defined as part of the method
should not exclude this.

Dave. Not speaking for Mitel Semiconductor.

--
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277  mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306  http://www.gpsemi.com

Subject: Re: (SMU) Migrating Subtypes - Rather not

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Frankel...

> > I do not see an inconsistency. Creating and deleting all supertypes is just a
> > mechanism for maintaining consistency. The real requirement is that the application
> > be consistent at the end of the action. If you are just deleting a subtype, then you
> > would, indeed, have to delete all the supertypes before exiting the action to
> > maintain consistency. However, for subtype migration all you have to ensure is that
> > the complete hierarchy is intact at the end of the action -- which it will be so long
> > as the subtype create and delete are both done in that action. In fact, during
> > migration the supertype hierarchy is completely untouched.
>
> Now you are making up rules. Sure, that all sounds like a wonderful set
> of rules. Where did they come from?

Well, I would start with the paragraph you quoted to Hogan. I believe
the first sentence, "The action must leave subtypes and supertypes
consistently populated.", is the real requirement that I cited. (In
fact, I would argue that the whole section starting on pg. 45 makes
this eminently clear.) As it happened the section was just talking
about _permanent_ creates and deletes, not migration. However, for
migration the same principle holds true -- the hierarchy must be
consistent at the end of the action after the migration.

> > The analyst is responsible for ensuring that when a subtype is migrated
> > it is placed in a state in the new state machine that is appropriate
> > for subsequent processing. It seems to me that the ability to take
> > advantage of this is the core reason for wanting to do subtype
> > migration.
>
> More wonderful rules.

That was the point of OOA96's support for specifying the state of a
created active object. Clearly for subtype migration the analyst must
be able to do this to be able to have proper interaction with other
objects' state machines.

> Assuming your migrating subtypes are under an associative Connection
> Test object, as opposed to the Tester Pin itself, its role migration
> seems like a good example. However, it makes a better case for our
> technique, because your Tester Pin is bouncing back and forth between
> test subtypes like a ping-pong ball. That is a lot of subtype creation
> and deletion integrity to deal with. We would just create all
> Connection Test subtype instances necessary for this test run on a
> particular Tester Pin, and use their dynamics as necessary, *including
> simultaneously*, when required.

I lost the thread of this.
There is no relevant M:M relationship needing an associative object.
When another, quite independent object makes a connection, the action
that does that invokes a synchronous service to perform the migration.
There is only one Tester Pin and I see no need for other objects that
are not Tester Pins.

> > If you want to instantiate all possibilities, how would you capture the fact in the
> > above example that the alternate possibilities cannot coexist at the same moment in
> > time? Instantiating the alternatives strikes me as highly misleading about what is
> > actually happening -- it implies coexistence of different real world entities.
>
> We say they can coexist, just that you choose not to use them at the
> same time. There are other examples, such as the Employee, where roles
> not only coexist, but execute concurrently and simultaneously in the
> real world. Your scheme does not account for that possibility.

If the roles cannot coexist, then I think it is misleading to create
them. There is enough difficulty in handling different views of time
in an OOA without having instances that should not exist around.

As to the second point -- that roles can coexist -- I am not sure that
I can agree with that. Even in a meeting where an Employee's Manager
and the Employee's Peons are present, that Employee doesn't
simultaneously play both roles. If the Employee gives a report to the
Manager, that is one role. If the Employee then turns around and
delegates some tasks to the Peons, that is another role. But the
Employee doesn't report and delegate at the same time.

Even if one could come up with some arcane situation that could be
interpreted as simultaneously enacting dual roles, I think it would not
be very relevant. S-M provides an abstract description mechanism. As
in any abstraction there is some sacrifice of detail for the sake of
simplification and generality. The methodology's abstraction for role
playing seems to assume that only one role can be enacted at a time.
Intuitively, this seems reasonable to me and I don't see any convincing
evidence that that abstraction cannot be made to work to describe the
real world. The methodology is merely saying that if such a situation
arose, one could decompose the situation into sequences of roles.

> I think that is a limiting perspective. My experience is that it is
> less misleading, to the point of being explicitly direct, because:
>
> a) It allows for multiple roles to be played simultaneously, which is
> much more consistent with what happens in the real world than assuming
> roles are always mutually exclusive, while at the same time providing
> for exclusivity when necessary through dormant states in "non-active"
> subtypes, and

As I indicated above, I believe this is extremely rare at best.

> b) It addresses the issue of a requirement in the model that is
> missing: that certain real world instances have a limited potential.
> It is not the case that every instance can always migrate to every
> role. By establishing the potential at creation time, we have another
> tool for bounding dynamic execution and moving more testing to the
> analysis phase.

I think this is a valid point. There is no explicit expression of such
limitations and they could be interesting for a given problem. But
there are two levels here. First is the case where we have subtypes
subA, subB and subC. One could conceive of a situation where if the
initial role was subA, then one could _only_ migrate to subB but not
subC, and so on.
I would argue that this can be handled explicitly by relationships
between the subtypes.

The second case occurs when some particular instance that is playing,
say, subA can never play a subC role but an entirely different instance
that happened to be playing subA could play the subC role. My
inclination would be to argue that something explicit about the first
instance has not been modeled. Whatever prevents certain migrations is
probably a characteristic of the instance object or of the
relationships between it and other objects, and the phenomenon could be
documented through that feature (e.g., attribute description).

But I would argue that you have the same problem with your approach.
The OOA does not explicitly indicate which of the relevant potential
instances will actually be instantiated in a particular situation. I
don't see this as substantively different from not indicating which
subtypes will actually be instantiated in a given situation.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) Separation of Function and Sequence

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

This note, and a number of following notes, address a single topic
raised about the language.

At 08:02 AM 3/13/98 +0000, Dave Whipp wrote:
>--------------------------------------------------------------------
[snip]
>SMALL does not completely separate the function and sequence.
>Accessors define their filters. It may be better to name a
>test process within the accessor instead of expressing the
>computation directly. Also, the space in the language that
>is taken by the filter expression could possibly be used for
>assignment of intermediate navigation outputs:
>
> A(one) -> [R1] B(all, >myB) -> [R2] C(all) > myC

The philosophy behind the language _does_ separate function and
sequence (if we mean the same thing by those terms :). Accessors do
indeed define their filters (as in A (one, x > ~y ) ... ), but these
filters are not necessarily functions/computations. In my example
above, the architecture can maintain one list for those instances of A
where x > ~y, and another for x <= ~y.

As we have discussed earlier, there are good reasons to be suspicious
of intermediate outputs in any case.

I have the feeling I've missed the boat here, but the topic is an
important one so I plunged in anyway....

-- steve mellor

Subject: Re: (SMU) Contexts and Naming

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 10:20 AM 3/25/98 -0600, "Lynch, Chris D. SDX"
<LYNCHCD@HPD.Abbott.com> wrote:
>--------------------------------------------------------------------
>>Steve Mellor writes to shlaer-mellor-users:
>>--------------------------------------------------------------------
>>
>>There are two issues here, one of which depends on the other.
>>
>>The first issue is the question as to whether .. ->[Rx->Ry->Rz] is
>>truly the same as ... ->[Rx->Ry] >temp; temp -> [Rz]. IMO, they are
>>NOT.
>>
>>Consider four objects A, B, C, D connected by four relationships R1-4,
>>where R4 is the composition of the other three. Let's say they're
>>organized as: A [R1] B [R2] C [R3] D [R4=R1+R2+R3] A.
>>
>>Starting from an instance of A, we can traverse to instances of C
>>via R4 and R3 (i.e. A [R4->R3] C).
>>We store the result in temp, then we can go from the temporary to B
>>via R2.
>>
>>The implementation of the first relationship traversal A to C
>>is actually going to be A [R1->R2] because of the composition.
>>The second relationship traversal (from C to B via R2) is fine.
>>
>>Had we instead formalized this relationship as a single traversal,
>>then the traversal A [R4->R3->R2] B would have been replaced,
>>by the translator, with a simple traversal of R1.
>>
>>Clearly the results are the same whichever way you represent it,
>>but the fact that A [R4->R3->R2] B is the same as A [R1] is lost.
>>
>>The second issue is whether it's OK to access the data of an object
>>'along the way'. For example, A -> [R4].access_of_D -> [R3->R2] B.
>>Following the logic above, we never even 'visit' D, so what could
>>the statement mean????
>>
>>I believe it's possible to define a reasonable semantic for this kind
>>of statement -- after all, we do seem to have an understanding of its
>>meaning -- but to my mind, this violates the underlying meta-model
>>of the language required for translation.
>>
>>-- steve mellor
>^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>On the first issue, it appears that it is only the composed relationship
>which invalidates the use of the temporary. Is this true?

Hmmm. I should get out more.... Yes. You're right. But I was not at
all clear. Here's my problem:

If we have a composed relationship, and we traverse the relationship
set in one direction (across the composition), and traverse another
relationship set that yields _the same set of instances_, _which is a
requirement of the composition_, then (pause for breath) it doesn't
matter which way around the relationship set we go.

Therefore, the OIM of this part of the processing must think of these
two relationship sets as being the same -- at some level.

Therefore, accessing data 'along the way' is, at least, suspect.

To make this concrete, consider the well-known University composition
example. In this case, we have a Department with many Professors, and
each Professor advises some number of students. Finally, the students
belong to a Department _which must be the same_ as the one that the
Professor works for. That's the composed relationship: traversing from
Student to Department along the composition must yield the same
Department as traversing through the Professor.

This is a trivial assertion, because that's the meaning of composition.
And note the equally trivial comment that if the relationships existed
without any composition, then we wouldn't be able to substitute the
relationship sets because a Student could major in a Department and be
advised by a Professor from another.

Now in the case that we have four objects, as in my initial example, we
can substitute the full traversal for the composition, but then the
traversal never 'visits' that intermediate object -- making the whole
idea of doing so in _any_ case quite invalid.

Separately, as Neil Lang pointed out as he forced me to be clearer :),
using a temporary variable to hold partial results can't yield
different results, but it will make it harder for the architecture to
do the 'right thing'. This is analogous to using temporary variables
in an arithmetic expression when they aren't necessary:

  x := a + b;
  y := c + d;
  z := x + y;

when what you want is:

  z := a + b + c + d;

The compiler will very likely generate less efficient/small code in the
former case.
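The composition constraint in the University example can be stated as
an executable check. The Python sketch below is purely illustrative --
the class and attribute names are invented for the example -- but it
shows why a translator may substitute one relationship set for the
other: both traversals must reach the same instance.

    class Department: pass

    class Professor:
        def __init__(self, dept):
            self.dept = dept            # "works for" relationship

    class Student:
        def __init__(self, major, advisor):
            self.major = major          # Student -> Department (composed)
            self.advisor = advisor      # Student -> Professor

    def composition_holds(student):
        # The direct traversal and the traversal through the Professor
        # must yield the same Department; that is what composition means.
        return student.major is student.advisor.dept

    physics = Department()
    prof = Professor(physics)
    student = Student(physics, prof)
    assert composition_holds(student)   # True: substitution is safe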
>On the second issue, it seems that your argument is that we don't visit
>D because having to visit D would prevent the substitution of another
>relationship (i.e., an optimization.) I think such an argument puts the
>cart before the horse. Shouldn't we be deciding whether the "grab data
>as you traverse" semantic gives a desirable expressive power to the
>models or, conversely, too much rope to get tangled in?

I agree in principle of course, but in this case, the 'optimization' is
inherent in the meaning of the relationship. That's why the example had
composition in it.

The bottom line is that relationship set traversals should be atomic.

-- steve mellor

Subject: (SMU) Relationship Navigation

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:02 AM 3/13/98 +0000, Dave Whipp wrote:
>I am not sure why the "->" is needed for relationship navigation.
>If the [R...] syntax is treated in the same way as a keyword
>(e.g. gen, link, etc.) then the above navigation could be
>expressed as:
>
> A(one) | [R1] B(all) > myB | [R2] C(all) > myC
>
>I am not sure whether the "> myB" bit belongs within brackets. It's
>not clear how to extract a combination of references and dataflows
>using this notation.

You're right. It ( -> ) isn't necessary.

In the context of the UML, however, we'll have to change the notation.
The UML doesn't distinguish between attributes and relationships in the
same way as S-M: it treats access to them in the same way. Hence the
Object Constraint Language (OCL) [approved as a part of the UML] allows
statements like:

  Company
  -------
  Self.Employee.Age   // employee is the role taken on by the Person

which in SMALL would be:

  Self -> [R1.'Employs'] Person.Age

Because the language (SMALL') must be compliant with the UML, we will
have to move closer to the former. Remember: the goal is an executable
UML that can be translated using RD.

-- steve mellor

Subject: RE: (SMU) Polymorphic events vs. "simultaneous interpretation of time"

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> David Stone writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>(1) Although it is clear in the method that there must in general
>always be precisely one subtype instance for each supertype instance,
>the method does not have any "multiple creation accessor" which would
>create all the instances at once. Thus there is bound to be a time
>during which a supertype instance exists without a subtype, or
>vice-versa. My question is: what happens to polymorphic events at
>those times (and analogous times during deletion and migration)?
>Carolyn Duby, and I think Neil Lang, were saying that it is a run-time
>error; this seems quite harsh to me. It can be difficult for the
>analyst to keep track of these situations, especially when following
>the concurrent interpretation of time.

This really is another thread entirely: problems with the so-called
"simultaneous interpretation of time". Last we left this, Ms. Shlaer
conceded that the method does not completely address the problems
associated with actions proceeding in parallel. I believe she said she
would be working on it.

Suffice it to say that for now you have a software architecture
problem: ensuring that all the correct members of the type-hierarchy
are in place when events are received by any state machine in the
object's hierarchy.
For me this means that migration should be accomplished in one action
and that the migration must be atomic with respect to that object's
neighbors in the OCM and OAM. (This could be described as the PT rule,
"actions must leave the instance consistent", considering that an
object in a hierarchy of types is conceptually one instance.)

As was stated in the "simultaneity" thread, the preferred method for
accomplishing this under the simultaneous interpretation of time is to
make special-case architectural mappings (i.e., colorization) to
specify the *action* as the unit of interleaving for the subtyped
object and those that depend on its internal consistency. In other
words, in the locality of that object, the interpretation of time is
"interleaved" rather than "simultaneous".

Chris Lynch
Abbott AIS
San Diego, CA

Subject: RE: (SMU) Polymorphic events

"Vock, Michael" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Response to 1) from david.stone@cambridge.simoco.com:

> 1) Although it is clear in the method that there must in general
> always be precisely one subtype instance for each supertype instance,
> the method does not have any "multiple creation accessor" which would
> create all the instances at once.

If you are creating the hierarchy within one state - synchronous
creates (i.e. Create or New or whatever) as opposed to asynchronous
(i.e. events) creates - the method's rules associated with
data/relationship integrity hold. All data and relationships operated
on within the current state action MAY become inconsistent sometime
during the execution of a state (i.e. creating a subtype in one action
statement and then creating the supertype in the next action
statement). But, when the execution of the state completes, the system
must be in a consistent condition (i.e. if I have a supertype instance,
there had better be a subtype instance corresponding to the supertype).

You could conceivably generate events to subtypes and supertypes to
create the hierarchy asynchronously, because you have initiated the
"conformance" process (see p. 106 of "Lifecycles" - Rules about
Consistent Data), but why do that to yourself and your Architects?

Correctly or incorrectly, I view an inheritance relationship in SM OOA
as a specialized binary relationship. If I have an instance of a
supertype I must have an instance of a subtype. This is the same rule
that I have when I model a non-conditional binary relationship:

               c
  DOG <<-----R1------> PERSON
  owns              is owned by

(OK, I know a DOG can be existent without an owner and a DOG could be
owned by multiple people, just humor me.)

With the above relationship, if I have an instance of a DOG, it had
better have an instance of a PERSON that owns it. If after the
execution of a state, I have an unowned DOG, then my analysis has
fundamental problems (disregarding the possibility that an event is
generated to an instance of PERSON like "P1: Hey, You Own This DOG").

Again, same rule associated with inheritance. One instance cannot
exist, within the realm of my analysis, without the other.

> Thus there is bound to be a time
> during which a supertype instance exists without a subtype, or
> vice-versa. My question is: what happens to polymorphic events at
> those times (and analogous times during deletion and migration)?

Never across state actions, unless your analysis is faulty or you are
taking the dangerous approach of generating creation events to subtypes
and supertypes separately.
Mike Vock
SRA International

Subject: Re: (SMU) Migrating Subtypes - Rather not

Mike Frankel writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
>
> lahman writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Frankel...
>
> Well, I would start with the paragraph you quoted to Hogan. I believe the first sentence,
> "The action must leave subtypes and supertypes consistently populated." is the real
> requirement that I cited. (In fact, I would argue that the whole section starting on pg.
> 45 makes this eminently clear.) As it happened the section was just talking about
> _permanent_ creates and deletes, not migration.

Why do you make that assumption? Just because it might seem ridiculous
to you now that it could have been referring to migration, based on
your chosen scheme for modeling it, doesn't mean that was the intention
of the paragraph. I would not presume to make statements about the
intention of the authors. I am only going by what is actually written
in the book, whether it is internally consistent, and whether it has
held up against experience and alternative techniques over 7 years of
practice.

> However, for migration the same principle holds true -- the hierarchy
> must be consistent at the end of the action after the migration.

Being consistent at the end does not place any stipulations or rules on
consistency in between.

> That was the point of OOA96's support for specifying the state of a created active object.
> Clearly for subtype migration the analyst must be able to do this to be able to have proper
> interaction with other objects' state machines.

This new rule/clarification was for support of proper *synchronous*
creation of active objects in general. There is no migration law that
says that one can't use a creation event on the new subtype state
machine to create the instance in the OOA, and color the event to be
implemented synchronously when self-direction is detected.

> > Assuming your migrating subtypes are under an associative Connection
> > Test object, as opposed to the Tester Pin itself, its role migration
> > seems like a good example. However, it makes a better case for our
> > technique, because your Tester Pin is bouncing back and forth between
> > test subtypes like a ping-pong ball. That is a lot of subtype creation
> > and deletion integrity to deal with. We would just create all
> > Connection Test subtype instances necessary for this test run on a
> > particular Tester Pin, and use their dynamics as necessary, *including
> > simultaneously*, when required.
>
> I lost the thread of this. There is no relevant M:M relationship needing an associative
> object.

An M:M relationship is not the only reason for creating an associative
object. Data describing the relationship and relationship lifecycles
are two more. Since the Tester Pin's behavior is dependent on the kind
of Test relationship it is in, that behavior should be allocated
according to its dependency in the model. (That is another thread.)

> As to the second point -- that roles can coexist -- I am not sure that I can agree with
> that. Even in a meeting where an Employee's Manager and the Employee's Peons are present,
> that Employee doesn't simultaneously play both roles. If the Employee gives a report to the
> Manager, that is one role. If the Employee then turns around and delegates some tasks to
> the Peons, that is another role.
> But the Employee doesn't report and delegate at the same time.

That is only a real-world implementation constraint (one brain, one
processor). The domain defines naturally concurrent activities in the
analysis. And who's to say part of the OOA behavior defined by a Peon
state model isn't being performed simultaneously by a code translator
(another implementation processor) while the Employee is busy managing
in the meeting at the same time?

> Even if one could come up with some arcane situation that could be interpreted as
> simultaneously enacting dual roles, I think it would not be very relevant. S-M provides an
> abstract description mechanism. As in any abstraction there is some sacrifice of detail
> for the sake of simplification and generality. The methodology's abstraction for role
> playing seems to assume that only one role can be enacted at a time. Intuitively, this
> seems reasonable to me and I don't see any convincing evidence that that abstraction cannot
> be made to work to describe the real world. The methodology is merely saying that if such
> a situation arose, one could decompose the situation into sequences of roles.

The SM method is based on modeling natural concurrency when it exists,
and sequence only when required. Why would you want to change that
philosophy by taking something that was naturally concurrent (given
that it actually was) and making it sequential? Definitely not the
direction I hope the method evolves in.

> I think this is a valid point. There is no explicit expression of such limitations and
> they could be interesting for a given problem. But there are two levels here. First is
> the case where we have subtypes subA, subB and subC. One could conceive of a situation
> where if the initial role was subA, then one could _only_ migrate to subB but not subC, and
> so on. I would argue that this can be handled explicitly by relationships between the
> subtypes.
>
> The second case occurs when some particular instance that is playing, say, subA can never
> play a subC role but an entirely different instance that happened to be playing subA could
> play the subC role. My inclination would be to argue that something explicit about the first
> instance has not been modeled. Whatever prevents certain migrations is probably a
> characteristic of the instance object or of the relationships between it and other objects
> and the phenomenon could be documented through that feature (e.g., attribute description).
>
> But I would argue that you have the same problem with your approach. The OOA does not
> explicitly indicate which of the relevant potential instances will actually be instantiated
> in a particular situation. I don't see this as substantively different from not indicating
> which subtypes will actually be instantiated in a given situation.

It is true that in some cases, attribute values of an object could
determine whether migration is possible at that moment, or which
subtype is the next logical choice. However, the level of constraint
that I am talking about limits potential, as does relationship
multiplicity, as opposed to rules for choosing during execution of
actions.

Your comment about having special subtype relationships which designate
"permissible migration paths" is on the right track, though that
introduces new subtypes based on their "next subtype" to migrate to,
unless you allow these relationships to be conditional, in which case
the potential is still instance-based and still not specified
explicitly.
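Such a "permissible migration path" constraint can at least be stated
concretely, even though neither the OIM nor the STD notation captures
it. The sketch below is a hypothetical illustration only -- the table
and function names are invented: allowed subtype-to-subtype moves are
recorded in one place and checked before any migration is performed.

    # Allowed migrations among subtypes "B", "C", "D" of supertype "A".
    # The pair ("B", "D") is deliberately absent: B may reach D only
    # by migrating through C first.
    ALLOWED_MIGRATIONS = {
        ("B", "C"),
        ("C", "D"),
    }

    def migrate_subtype(current, target):
        if (current, target) not in ALLOWED_MIGRATIONS:
            raise ValueError(f"migration {current} -> {target} not permitted")
        return target

    role = migrate_subtype("B", "C")    # fine
    role = migrate_subtype(role, "D")   # fine: B -> C -> D
    # migrate_subtype("B", "D") would raise ValueError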
Just as one "knows" at the point of instantiation that a Vehicle is either a Boat or a Car, I submit that one "knows" which instances have potential to migrate and which don't, just as when one does migration in general, you "know" which subtype to start in. This does not require special relationships - you just create that kind of instance. That is how we handle potential for migration (not the permissible migration path, which is an interesting issue unto itself). We just create the potential subtypes for that instance, because we "know". As far as permissible migration path is concerned: Given supertype A, and subtypes B, C, and D, how do we specify that it is OK to migrate from A to B to C, but not directly to D? That is definitely a constraint not currently specified in either the OIM or STD notation. And it is definitely a broad constraint, like multipliciy, which should be visible somewhere else besides action language. -- ----------------------------------------------- Mike Frankel Director of Software Engineering Esprit Systems Consulting, Inc. 610-436-8290 fax 610-436-9848 mfrankel@EspritInc.com http://www.EspritInc.com -->Domain Engineering For Reuse -->Vital Link Team Consulting -->BASELINE Domain Models -->Object Methods Training and Consulting -->Structured Methods Training and Consulting "Strategies for Computer and Human Systems" ----------------------------------------------- Subject: Re: (SMU) Migrating Subtypes - Rather not Mike Frankel writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Frankel wrote: > > As far as permissible migration path is concerned: Given supertype A, > and subtypes > B, C, and D, how do we specify that it is OK to migrate from A to B to > C, but > not directly to D? That is definitely a constraint not currently > specified > in either the OIM or STD notation. And it is definitely a broad > constraint, like > multipliciy, which should be visible somewhere else besides action > language. I am not holding a debate with myself, this is a correction. I meant to type: Given supertype A, and subtypes B, C, and D, how do we specify that it is OK to migrate from B to C to D, but not directly from B to D? That's what I get for chewing my food too fast. > > -- > ----------------------------------------------- > Mike Frankel > Director of Software Engineering > Esprit Systems Consulting, Inc. > 610-436-8290 fax 610-436-9848 > mfrankel@EspritInc.com > http://www.EspritInc.com > > -->Domain Engineering For Reuse > > -->Vital Link Team Consulting > > -->BASELINE Domain Models > > -->Object Methods Training and Consulting > > -->Structured Methods Training and Consulting > > "Strategies for Computer and Human Systems" > ----------------------------------------------- -- ----------------------------------------------- Mike Frankel Director of Software Engineering Esprit Systems Consulting, Inc. 610-436-8290 fax 610-436-9848 mfrankel@EspritInc.com http://www.EspritInc.com -->Domain Engineering For Reuse -->Vital Link Team Consulting -->BASELINE Domain Models -->Object Methods Training and Consulting -->Structured Methods Training and Consulting "Strategies for Computer and Human Systems" ----------------------------------------------- Subject: Re: (SMU) Polymorphic events lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Stone... 
> (1) Although it is clear in the method that there must in general
> always be precisely one subtype instance for each supertype instance,
> the method does not have any "multiple creation accessor" which would
> create all the instances at once. Thus there is bound to be a time
> during which a supertype instance exists without a subtype, or
> vice-versa. My question is: what happens to polymorphic events at
> those times (and analogous times during deletion and migration)?

You raise two interesting points.

First, when does the polymorphic readdressing take place? I think if
an event to the supertype is readdressed upon entry into the queue,
there could be a problem because by the time it is processed some
action may have migrated the subtype so that it has a different
address. This, I submit, is a purely architectural problem. The
subtype identifier is still the same (i.e., by definition it is the
same as the supertype) so this is a matter of handling pointers and
whatnot, which gets easier if you don't actually delete. Though odds
are that it would be simpler to de-reference the event when it was
executed from the queue.

The second point relates to the simultaneous view of time. If an
action of some other object, A1, is doing the migration synchronously,
it could be possible for an event to the supertype, B, to be processed
while this is going on. As you point out, if that event is processed
between the delete and the create being done in A1's action there is a
Problem. However, I think this is also an architectural issue in that
if one is going to support the simultaneous view, some sort of locking
mechanisms will have to be supported. If so, then the architecture
would simply lock access to the B instance when the A1 action starts.
This would defer processing of the polymorphic event until the system
was again consistent at the end of the A1 action.

Thus I think the answer to your question is that nothing unusual
happens to the polymorphic events, provided you have the appropriate
architecture.

> Carolyn Duby, and I think Neil Lang, were saying that it is a run-time
> error; this seems quite harsh to me. It can be difficult for the
> analyst to keep track of these situations, especially when following
> the concurrent interpretation of time.

The analyst, though, is still responsible for ensuring that the sundry
subtypes can handle any polymorphic events that could conceivably be
lying around on the queue. In a truly asynchronous system this might
be a nontrivial task that requires care in exactly where and when
subtypes are migrated together with state machine adjustments. Though
I have never seen a situation where this could not be done in the OOA,
it is sometimes tempting to solve the problem via colorization of the
translation.

> (2) Concerning the priority of polymorphic events to self, we must
> have some rule. We could say (as Neil seemed to be suggesting) that
> such events are not permitted, though I think there are cases when you
> wish to generate an event to an instance and you don't know (at
> analysis time) whether it is a self-directed event. What if an
> instance finds (by a selective read accessor) an instance of its
> supertype object, and then wishes to send that instance a polymorphic
> event? In such a case it complicates the analysis to have to check
> whether it is in fact, by polymorphism, a self-directed event, and
> send it monomorphically if so.

I am not sure that Neil was saying that.
I think I argued previously that since there is only one instance, the
addressing is not relevant. An event generated in the subtype but sent
to the supertype is still generated in and addressed to the same
instance, so it must be regarded as self-directed.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Polymorphic events

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > Wouldn't the need for this go away if you had a single supertype,
> > Protocol, that had five subtypes that were migrated (the four
> > strobe/continuous vs. input/output combinations and the
> > strobed-bidirectional)? Then Channel would be 1:1 with Protocol
> > and the migration would be traditionally simple.
>
> When you combine state machines, you multiply the number of states.
> Thus, if you assume that each of my "traditional" subtypes has 3
> states, then your suggestion would cause an increase of states in
> each of the 4 subtypes from 3 to 9. There would also be a lot of
> repetition of behaviour, because combined state machines require
> the action to be repeated in each place where the state is used.

I can see a whole lot of redundancy between subtype states (e.g., in
effect the strobed/input protocol would be identical to the
continuous/input protocol). But I don't see where the new states in a
particular FSM come from; if anything they have been simplified by
decomposition of function. The redundancy can be handled by dumping it
into a supertype FSM.

> You may wish to examine Figure 2.6.3 on page 30 of "Object
> Lifecycles". Try migrating any of the subtypes and note the
> effect on the other relationships. Assume that the bank is
> extremely customer friendly and allows these migrations.

OK, I see where you are coming from on this. It opens up a whole new
arena for philosophical waxing.

My quick instinctive answer is that I don't think there can be any
subtype migration at all in that diagram (ignoring the issue that
Accounts can coexist). If you wanted to move a 1->3->7 (Interest
bearing savings account) to a 1->2->5 (regular checking account), you
would have to do it the hard way by literally deleting the entire
Account and then recreating it. That is, the delete would have to kill
off the entire tree. In fact, you would probably create the new
checking account, do a balance transfer to it from the old one, and
then delete the old account -- maybe in separate actions.

I do not see shifting intermediate elements of the hierarchy as subtype
migration. I see subtype migration as a pure leaf operation under a
single supertype. Put another way, subtype migration involves changing
roles for an entity that exists continuously across roles. Switching
an Account as described in 2.6.3 does not simply change the role that
an Account plays -- it changes the nature of the Account. I submit
that the nature of the instance is reflected in the supertype
hierarchy, its parent/sibling relationships, and the contained data, so
that if you change that you are changing the entity itself, not just
the roles that it plays.
If the hierarchy is kept the same, then I have little difficulty
rationalizing that the migrated subtype is really the same critter I
had in hand prior to the migration, which gets back to my previously
stated notion that one should think of the supertype as the entity and
the subtypes as roles that it plays. (This would be even firmer ground
if it had the same data attributes, but I think this is a small
sacrifice to pay for the value of supporting roles.) As soon as the
hierarchy changes, it seems to me that the thing I have now is very
much a different object than the thing I had before. And if it really
is a different object, then it shouldn't be migrating.

--
H. S. Lahman                          There is nothing wrong with me that
Teradyne/ATB                          could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Contexts and Naming

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mellor...

> If we have a composed relationship, and we traverse the relationship
> set in one direction (across the composition), and traverse another
> relationship set that yields _the same set of instances_,
> _which is a requirement of the composition_, then (pause for breath)
> it doesn't matter which way around the relationship set we go.
>
> Therefore, the OIM of this part of the processing must think of
> these two relationship sets as being the same--at some level.
>
> Therefore, accessing data 'along the way' is, at least, suspect.

I do not see that this conclusion follows. All this says to me is that
the translator has some freedom to optimize. Though the analyst
specified one navigation, the translation can now use another
navigation if that will be more efficient. I see this as analogous to
the use of an array subscript within a loop. The compiler should be
free to eschew the indexing for simply incrementing an address
(provided the compiler's debugger still shows an index value when the
subscript is examined).

If the relationship traversals are, indeed, equivalent, then the
analyst's specification becomes an intent with a suggestion for the
choice of alternatives. If the analyst explicitly requests an
intermediate value the suggestion becomes stronger, but it is still not
a mandate -- so long as the translation obtains the intermediate data
in a manner that can be demonstrated as consistent from the traversals
being equivalent.

If I am at A and I want both a C reference and a B reference AND the
relationship loop is defined to be traversed either way, then I do not
see why it would matter which set of traversals the translation
actually used. By definition I can get to the same C both ways and I
can get to the same B both ways, and in each pair one of the traversals
involves B [R2] C or C [R2] B. Therefore I see no difference between

   Cref = A -> R1 -> R2 -> C
   Bref = A -> R4 -> R3 -> R2 -> B

and

   Cref = A -> R1.Bref -> R2 -> C

What am I missing here?

> To make this concrete, consider the well-known University composition
> example. In this case, we have a Department with many Professors,
> and each Professor advises some number of students. Finally,
> the students belong to a Department _which must be the same_ as the
> one that the Professor works for. That's the composed relationship:
> Traversing from Student to Department along the composition
> must yield the same Department as traversing through the Professor.
> This is a trivial assertion, because that's the meaning of
> composition. And note the equally trivial comment that if the
> relationships existed without any composition, then we wouldn't be
> able to substitute the relationship sets because a Student could
> major in a Department and be advised by a Professor from another.
>
> Now in the case that we have four objects, as in my initial example,
> we can substitute the full traversal for the composition, but then
> the traversal never 'visits' that intermediate object--making the
> whole idea of doing so in _any_ case quite invalid.

OK, I am confused again. The four objects are: Professor, Department,
Student and...?

Never one to let confusion stand in the way of obfuscation, I still
don't see a problem. As I recall, the issue arose around wanting to be
able to access references for instances that were intermediate points
of a relationship traversal. If the OOA traversal does not traverse
that intermediate instance, why would one expect to obtain the
reference from that traversal?

However, if the OOA traversal does traverse that intermediate instance,
then it is up to the architecture to provide the right instances. This
essentially means that the translation cannot take alternative paths
from that specified in the OOA unless it is demonstrable that they are
equivalent. Lacking a composed relationship or other clues, as in this
case, it seems to me the translation must be quite literal-minded about
using the traversal the analyst specified. If it is literal-minded, how
can it return an incorrect reference for any point on the requested
traversal?

> Separately, as Neil Lang pointed out as he forced me to be
> clearer :), using a temporary variable to hold partial results can't
> yield different results, but it will make it harder for the
> architecture to do the 'right thing'. This is analogous to using
> temporary variables in an arithmetic expression when they aren't
> necessary:
>    x := a + b;
>    y := c + d;
>    z := x + y;
> when what you want is:
>    z := a + b + c + d;
> The compiler will very likely generate less efficient/smaller code
> in the former case.

Unfortunately this is not a good example. Any mature optimizing
compiler would probably generate identical code for both these cases
(assuming x and y were not referenced again in scope). The problem lies
in a larger context when compiler optimization is not up to par. In
that case the first example is more likely to generate better code than
the single statement because of common expressions in the
multi-statement context in which the fragment is embedded (i.e., when x
and y are used elsewhere).

Back in the Olden Days all Real Programmers always wrote FORTRAN
programs like the first example because the compilers weren't very
bright about common expression reduction. By the early '70s life was
much better and we could achieve the same efficiency with

   z = (a+b) + (c+d);

Since optimization is not yet a hallmark of code generators, I think
the analogy implies that simple statements might well make it easier
for the translator in the full context. B-)

--
H. S. Lahman                          There is nothing wrong with me that
Teradyne/ATB                          could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Migrating Subtypes - Rather not

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Frankel...
> > Well, I would start with the paragraph you quoted to Hogan. I
> > believe the first sentence, "The action must leave subtypes and
> > supertypes consistently populated." is the real requirement that I
> > cited. (In fact, I would argue that the whole section starting on
> > pg. 45 makes this eminently clear.) As it happened the section was
> > just talking about _permanent_ creates and deletes, not migration.
>
> Why do you make that assumption? Just because it might seem
> ridiculous to you now that it could have been referring to migration,
> based on your chosen scheme for modeling it, doesn't mean that was
> the intention of the paragraph. I would not presume to make
> statements about the intention of the authors. I am only going by
> what is actually written in the book, whether it is internally
> consistent, and whether it has held up against experience and
> alternative techniques over 7 years of practice.

Then I guess we read the book differently. Migration is discussed in a
separate section on pages 57-58.

> > However, for migration the same principle holds true -- the
> > hierarchy must be consistent at the end of the action after the
> > migration.
>
> Being consistent at the end does not place any stipulations or rules
> on consistency in between.

That is correct, the methodology does not require consistency at all
times within an action.

> > That was the point of OOA96's support for specifying the state of a
> > created active object. Clearly for subtype migration the analyst
> > must be able to do this to be able to have proper interaction with
> > other objects' state machines.
>
> This new rule/clarification was for support of proper *synchronous*
> creation of active objects in general. There is no migration law that
> says that one can't use a creation event on the new subtype state
> machine to create the instance in the OOA, and color the event to be
> implemented synchronously when self-direction is detected.

I don't think the issue is about events. It is about defining the
correct state for the migrated instance when it is created. In some
cases the state after migration is necessarily different than the
default state for creation. Without the OOA96 update it was not
possible to deal with this problem.

> M:M relationship is not the only reason for creating an associative
> object. Data describing the relationship and relationship lifecycles
> are two more. Since the Tester Pin's behavior is dependent on the
> kind of Test relationship it is in, that behavior should be allocated
> according to its dependency in the model. (That is another thread.)

I agree associative objects can be used for relationships other than
M:M -- when appropriate. However, I see no reason for one in this case
since migration describes the situation succinctly; that would be more
IM clutter than clarification.

> That is only a real-world implementation constraint (one brain, one
> processor). The domain defines naturally concurrent activities in the
> analysis. And who's to say part of the OOA behavior defined by a Peon
> state model isn't being performed simultaneously by a code translator
> (another implementation processor) while the Employee is busy
> managing in the meeting at the same time?

But it is supposed to be the real world we are modeling. If the real
world rarely, at best, uses simultaneous roles why would the OOA need
to support this?

> The SM method is based on modeling natural concurrency when it
> exists, and sequence only when required.
> Why would you want to change that philosophy by taking something
> that was naturally concurrent (given that it actually was) and make
> it sequential? Definitely not the direction I hope the method
> evolves.

I would agree that this would not be a good thing -- provided roles
really were 'naturally' concurrent in the real world. I see no
convincing evidence that they are and I see a busload of anecdotal
examples where they aren't.

> Just as one "knows" at the point of instantiation that a Vehicle is
> either a Boat or a Car, I submit that one "knows" which instances
> have potential to migrate and which don't, just as when one does
> migration in general, you "know" which subtype to start in. This does
> not require special relationships - you just create that kind of
> instance. That is how we handle potential for migration (not the
> permissible migration path, which is an interesting issue unto
> itself). We just create the potential subtypes for that instance,
> because we "know".

I disagree that one would always know what potential subtypes would be
needed for a particular instance. In the real application of my Tester
Pin case the connection that determines the subtype role is defined via
a domain bridge. At the time of instantiation the domain has no way of
knowing which specific connections will be required during the upcoming
testing.

> As far as permissible migration path is concerned: Given supertype A,
> and subtypes B, C, and D, how do we specify that it is OK to migrate
> from A to B to C, but not directly to D? That is definitely a
> constraint not currently specified in either the OIM or STD notation.
> And it is definitely a broad constraint, like multiplicity, which
> should be visible somewhere else besides action language.

I agree that this is important information and one should find a way to
capture it. One way would be to use relationships to define the valid
paths (e.g., "is followed by" or "can reconnect to"). But still I think
you have the same problem if you instantiate your candidate type
instances. I will use whatever mechanism you use to define the order in
which they will be activated.

--
H. S. Lahman                          There is nothing wrong with me that
Teradyne/ATB                          could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Polymorphic events

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Stone wrote:
> David Stone writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> I am pleased to receive such a rapid response to my previous posting
> about this, but from the replies it is obvious that I didn't explain
> myself clearly. I was trying not to prejudice the issues, but in
> doing so was too vague. Here's an attempt to clarify them.
>
> (1) Although it is clear in the method that there must in general
> always be precisely one subtype instance for each supertype instance,
> the method does not have any "multiple creation accessor" which would
> create all the instances at once. Thus there is bound to be a time
> during which a supertype instance exists without a subtype, or
> vice-versa. My question is: what happens to polymorphic events at
> those times (and analogous times during deletion and migration)?
I believe (as has been noted in subsequent postings by Chris Lynch and
Mike Volk) that this issue is not specific to subtype migration, but is
much more general; namely the fact that it can take multiple processes
(within a single action or across multiple actions) to restore
consistency in an OOA model. How does one deal with this in the
simultaneous interpretation of time? Whose responsibility is it to
manage this (the analyst's or the architect's)? This issue must be
discussed and resolved, but let's not try to solve it at the same time
we're focusing on polymorphic events.

> Carolyn Duby, and I think Neil Lang, were saying that it is a
> run-time error; this seems quite harsh to me. It can be difficult for
> the analyst to keep track of these situations, especially when
> following the concurrent interpretation of time.
>
> (2) Concerning the priority of polymorphic events to self, we must
> have some rule. We could say (as Neil seemed to be suggesting) that
> such events are not permitted, though I think there are cases when
> you wish to generate an event to an instance and you don't know (at
> analysis time) whether it is a self-directed event. What if an
> instance finds (by a selective read accessor) an instance of its
> supertype object, and then wishes to send that instance a polymorphic
> event? In such a case it complicates the analysis to have to check
> whether it is in fact, by polymorphism, a self-directed event, and
> send it monomorphically if so.

I recall a posting by John Yeager some months ago in which he
differentiated between an instance generating an event directly,
deliberately, knowingly, etc. to itself, and an instance generating an
event to a related instance that just happens to be itself. He
suggested that the self-directed event rule should apply only to the
first category of events to oneself. I think John is "right on" (to use
a Steve Mellorism) and I would like to see further work on this the
next time we extend the OOA methodology.

> ((3) was just a request for the wording of OOA'96 to be clarified.)
>
> (4) Concerning loops in supertype/subtype "hierarchies", it is quite
> true, as Neil says, that in most cases these cannot occur because of
> the constraints on the numbers of instances. However, the degenerate
> case, in which a supertype object has only one subtype, is permitted
> (as far as I know) in the method, and so an architecture must do
> something with such a case. We all seem to agree that such loops
> should be banned: all that is needed is that the official
> documentation of the method say so in future.

If I felt particularly argumentative, I would probably assert that the
method doesn't allow the one-subtype case (after all, you use the
subtype construct only when you want to model mutual exclusion). But
regardless, I don't see how that introduces the loops that you
mentioned in your original posting.

> (5), (6), (7): These were all about the possible positions of active
> objects in the supertype/subtype hierarchy, and what a complete
> polymorphic event table was. I gathered that both Carolyn and Neil
> were implying that each polymorphic event should be mapped at
> run-time to exactly one event. To state what I think they said more
> formally, for each polymorphic event label consider all the paths in
> the tree, from the object to which the polymorphic event is directed,
> to the leaf nodes.
> For every such path there must be precisely one entry in
> the polymorphic event table, which must map the polymorphic event
> label to a plain event label, of an event directed to one of the
> objects on that path.
>
> I think this is the definition of "complete". This is not what either
> Neil or Carolyn said exactly, but I think it's the generalized
> expression of what they meant.

That's a much more precise statement, but it may need some modification
to ensure that two or more paths to different leaf nodes passing
through the same active interior node have identical mappings. (I think
that's what I want to say.)
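As a concrete reading of that definition, here is a small sketch
(Python; the hierarchy, event labels, and table contents are invented)
that checks a polymorphic event table for completeness -- exactly one
mapping entry on every path from the addressed object to each leaf:

    # hierarchy maps each supertype to its subtypes; leaves are absent.
    hierarchy = {"B": ["B1", "B2"], "B1": ["B11", "B12"]}

    def paths(obj):
        # All paths from obj down to the leaf nodes.
        subs = hierarchy.get(obj, [])
        if not subs:
            return [[obj]]
        return [[obj] + rest for s in subs for rest in paths(s)]

    def is_complete(event_table, root, poly_label):
        # event_table is a set of (polymorphic label, target object)
        # pairs; exactly one target must lie on each path.
        for path in paths(root):
            if sum(1 for o in path if (poly_label, o) in event_table) != 1:
                return False
        return True

The refinement suggested above -- that paths sharing the same active
interior node must map identically -- would be an additional check on
top of this one.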
> I do notice that Neil uses "instance" to mean what I mean when I say
> "all related instances in a super/subtype hierarchy from the root to
> the leaf". This usage seems different from the established usage in
> e.g. Modeling the World in States p.29.
>
> --
> David Stone
> Sent by courtesy of, but not an official communication from:
> Simoco Europe, P.O.Box 24, St Andrews Rd, CAMBRIDGE, CB4 1DP, UK

--
----------------------------------------------------------------------
Neil Lang                                    nlang@projtech.com
Project Technology, Inc.
510-567-0255 x623
10940 Bigge Street
San Leandro, CA 94577
http://www.projtech.com
----------------------------------------------------------------------

'archive.9804' --

Subject: Re: (SMU) Polymorphic events

baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Neil...

> We did some work a few years ago on coordinating the lifecycles
> of migrating subtypes and quickly realized that it was not as easy
> as we had expected. Based on the not-complete research then,
> we observed (as you also noticed) that a combination of
> asynchronous creation/synchronous deletion or asynchronous
> deletion/synchronous creation seems to work best. Trying to
> do both asynchronously is laden with problems.
>
> Which is why I personally favor raising the level of representation
> of subtype migration at the analysis level to a single "migrate"
> operation. It would eliminate a lot of pro-forma process modeling
> currently required to convey the same idea. In addition, being a
> single, atomic (if I can use that word) process, it would be easier
> to indicate that the architecture must implement the complete
> migration in an uninterrupted fashion.
>
> I have not thought this idea out completely so it may be full of
> holes, but I'd be interested in getting some feedback from you.

I think this is moving in the right direction. It would be nice if the
mechanism allowed the analyst to have a delete state that is executed
when migrating out of a subtype and also have a create action
associated with the subtype being migrated to. As Dave Whipp pointed
out, there may be multiple layers in the super/subtype structure that
must be managed upon migration. It seems most natural for each subtype
to handle deleting/creating the appropriate super/subtypes, and
populating the instance data. Additionally, there may be other actions
that must be performed at the time of migration that are best
associated with one subtype or the other. [Using asynchronous
creation/synchronous deletion or asynchronous deletion/synchronous
creation _seriously_ limits your options here. Plus, we have migrating
subtypes that were developed prior to OOA96 when this wasn't an
option.]

It should be as if the delete state of one subtype and the create state
of the next subtype are executed and treated as one action. [If the
state models were combined at the supertype level, then it would be one
action.]

Bary Hogan
LMTAS

Subject: (SMU) Domains

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Everyone:

An issue that pops up with some frequency is: How do we come up with a
good domain chart?

Given that this is a cornerstone of the method, I have several
questions for you:

 (1) Is this really an issue? A technical issue? An issue of
     uncertainty? An issue of presentation?
 (2) Assuming it's an issue, what exactly is the problem? Starting a
     domain chart? Knowing you have a good one? A need for a library
     of hints or patterns? Testing?
 (3) Assuming we can identify the issues, what would help? A paper
     describing "how to" come up with a domain chart? A conversation
     on e-SMUG? A 'guide' to domain charts?
 (4) Given all that, do you have any insights that really made you
     'grok' separation of subject matters that you might want to
     share?

A lot of questions, I know, but I've been sitting on this for a while,
and you all have been quiet lately ;)

Any ideas or comments would be appreciated.
-- steve mellor

Subject: Re: (SMU) Domains

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mellor...

> An issue that pops up with some frequency is: How do we
> come up with a good domain chart?
>
> Given that this is a cornerstone of the method, I have
> several questions for you:
> (1) Is this really an issue? A technical issue?
>     An issue of uncertainty? An issue of presentation?

All of the above. I think it is potentially a major issue. In a PT
class back ca. '90 the instructor said that if we spend more than half
a day on the domain chart it is probably too much. My hindsight
indicates that this is not true. Drawing a bunch of ellipses is not
sufficient; one needs to get agreement on what the content and levels
of abstraction are. For a complex application this is not possible
without a lot of rumination, discussion, and several six-packs.

> (2) Assuming it's an issue, what exactly is the problem?
>     Starting a domain chart? Knowing you have a good one?
>     A need for a library of hints or patterns? Testing?

I think the core problem is that "subject matter" is not well defined.
I have been doing S-M for years and I still have no clear idea of what
one is, and I might well not recognize it as such if I ran into one on
the street. "Subject matter" is probably a good name for the concept --
it certainly sounds good -- but I bet that this thread will generate as
many different definitions as there are contributors.

Two secondary but nonetheless important issues are that there is a lot
of vagueness about levels of abstraction for domains and about how the
client/service relationship works. I think that understanding the level
of abstraction for each domain is crucial to both the modeling and
minimizing the bridge work. Also, Peter Fontana pushes the idea that
the domain chart should really be viewed as a description of how
requirements flow through the application. I don't give it quite the
importance that Peter does, but I think there is a lot of merit to that
view. But it is only hinted at in those pages devoted to domain charts.

The one thing that I don't think is much of an issue is testing. The
rigorous domain boundaries provide excellent, natural interfaces for
test harnesses. If one builds domain functionality incrementally, then
use case-based simulation and testing works quite smoothly.

> (3) Assuming we can identify the issues, what would help?
>     A paper describing "how to" come up with a domain chart?
>     A conversation on e-SMUG? A 'guide' to domain charts?

Yes. There certainly needs to be more documentation of what domains are
supposed to be and how they interact than presently exists. Because
domain charts involve rather subtle issues, such as level of
abstraction, I think more examples are needed for this than for, say,
state machines.

> (4) Given all that, do you have any insights that really
>     made you 'grok' separation of subject matters that you
>     might want to share?

FWIW, currently the first thing we look for is large scale reuse. [We
are biased in this because our product lines are designed to be plug
and play -- to the point of running our software components on
competitors' systems. We also need to share software across disparate
products from different divisions with very different test
philosophies.] We regard domains as being the primary vehicle for this
since the bridge philosophy dictates the necessary interfaces.
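To illustrate the kind of interface the bridge philosophy buys, here is
a minimal sketch (Python; the domain and operation names are invented,
not from any actual tester software) of a bridge as a thin glue layer:
the client domain sees only an abstract wormhole, and the bridge maps
it onto whichever service domain happens to be plugged in:

    class RealInstrument:                # one realized service domain
        def set_volts(self, v):
            print("driving hardware to", v, "volts")

    class SimulatedInstrument:           # a drop-in substitute for test
        def set_volts(self, v):
            print("simulating", v, "volts")

    class Bridge:
        # The glue layer: client-side wormhole -> service-domain call.
        def __init__(self, service):
            self.service = service

        def wh_apply_voltage(self, v):   # the wormhole the client uses
            self.service.set_volts(v)

Swapping service domains then touches only the bridge, never the
client's models.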
The recognized drawback of this view is that it is primarily a
partitioning based upon functionality.

The next thing we look at is the client/service relationships. We start
to get worried when we see domains connected to almost all other
domains, or up-and-down traversals on the chart. This usually stems
from having domains that are too complex and/or that are not properly
abstracted.

We tend to have a lot more domains than other practitioners,
particularly lower level service domains and realized implementation
domains. [Steve, I still remember the look on your face when I told you
we had an application with 30 domains.] In part this is driven by the
need to reuse components. For example, we have a Digital Diagnostics
package that we would like to be able to use on a variety of testers.
However, there are lots of ways to do digital diagnostics and the
sources of the necessary information (circuit topology, fault
dictionary, etc.) are disparate. So to be truly plug and play we need
lots of modular domains so that we can effectively customize our
package for specific environments. And digital diagnostics is only one
thin slice of the overall system pie.

--
H. S. Lahman                          There is nothing wrong with me that
Teradyne/ATB                          could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: (SMU) Domains and reuse

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Steve's query triggered my thinking about this because, by coincidence,
last Tuesday I was at a presentation by Carma McClure (is that a great
name, or what?) on component reuse and the upcoming IEEE process
standard related to this.

You may have noticed that all the gurus who were advocating object
reuse three years ago are now on the component reuse bandwagon. Since
object reuse failed impressively, the new silver bullet is component
reuse. Supposedly this will work because there are going to be
Standards. Those cited are things like CORBA, DCOM, UML, and IEEE's
12207 initiative. Abstract layered interfaces and meta models are going
to make communications between components language- and
implementation-independent.

I have two problems with all this. First, one could have made the same
arguments (i.e., if only we had Standards) for object libraries or even
procedure libraries. That is, I don't see anything about what
components provide that is sufficiently different so that the
impediments to reuse that procedure libraries and object libraries
faced will be overcome. The second problem I have is that it seems all
too complicated, with meta models of meta models and whatnot.

When I observe the Passing Parade, I can only find one area where reuse
has been enormously successful -- device drivers. I can run a wide
variety of programs on my computer that, say, put stuff on the screen
or talk to my voltmeter. These programs may be written in different
languages and have very different purposes so that there may be no
similarity between them at all. But they all (re-)use the same device
drivers to talk to the CRT or the VXI bus. Nowadays an application on
my machine can even talk to a device driver on an entirely different
platform.

Thus it seems to me that if one wants to figure out how to make reuse
work, the first place one should look for a paradigm is to examine
device drivers. And what does one find?
Highly modular functionality, strong firewalls isolating
implementations, and very simple procedural or message interfaces
rather than complex layered abstractions and meta models.

So what is my point in opening this thread? Well, when I look at S-M I
see highly modular functionality in Domains. When I look at wormholes,
I see either a basic procedural interface (synchronous) or a very
simple message interface (asynchronous events). I also see a firewall
that prevents either side from being intimate with the other.
Application writers who have to port their applications quickly learn
to access device drivers through a glue layer because the reality is
that interfaces cannot be standardized from one environment to another
so that the client's view will always match that of the service. In S-M
we call this glue layer a Bridge.

The opportunity for rigorously defining large scale reuse has always
been one of the most important qualities of S-M for me. The problem is
that I recognize opportunities for reuse (i.e., domains) primarily via
functional decomposition rather than through the mystical process of
identifying Subject Matters. Thus my hope is that whatever guidelines
are developed for identifying domains will accommodate the use of
domains for large scale reuse.

--
H. S. Lahman                          There is nothing wrong with me that
Teradyne/ATB                          could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: Re: (SMU) Domains

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

I agree with all that Lahman said, so I'll try not to repeat it. I'd
like to expand on the nature of "subject matter" (which, as Lahman has
said, is not well defined) and on the inadequacy of the domain chart
for representing the system-dependent relationships between domains.

My golden rule is that if I'm having difficulty expressing something
simple, then I'm using an incorrect abstraction. A domain is,
therefore, a closure within which all simple concepts can be described
simply. A domain is not an implementation unit; thus the term
"implementation domain" is inappropriate. There is a difference between
an implementation interface (firewall) and a domain boundary. The
domain chart does not make this distinction.

Most people partition a problem using horizontal and vertical slices.
Vertical partitioning allows a single abstraction layer to be split
into distinct "components". Horizontal partitioning splits a distinct
component into layers that span from the application abstraction to a
platform abstraction.

If you attempt to apply these same concepts to an SM domain chart then
you quickly find problems. A naive domain chart has an application at
the top, then service domains, then the architecture, and finally some
implementation domains at the bottom. These domains are shown connected
by bridges. This two-dimensional picture is misleading. There is no
difference between these differently classified domains. The different
classification is a result of the relationships between domains - i.e.
the bridges.

There are at least two fundamentally different types of bridge. The
first, characterised by message passing, is typically a mapping of
wormholes and types between two domains. They are generally 1:1; but
sometimes a bridge may broadcast a wormhole to many destination
domains.
This type of bridge may exist between any two domains, including those
that are normally characterised as architectural. The client domain
does not care.

The second type of bridge may be characterised by the subsumption of
the client into a server. The bridge provides a meta-mapping between
the formalism of the client and the model in the server. The client
model (and its population) is thus mapped onto the population tables of
the server. A server will often be populated from many clients; and the
information in the client may populate many servers. Bridges must also
be mapped using a meta-bridge.

The process of recursive design may be defined as the reification over
meta-bridges and the consequent removal of client domains from the
design-model. The end result is a set of implementation domains.
Ultimately, the entire domain chart is subsumed into a [single?] table:
computer memory! (The RD usually stops before this point is reached -
an operating system will generally exist that performs the mapping of
relocatable image files onto the system memory; and linkers/compilers
will do the previous bits.)

As with message bridges, meta-bridges may exist between any two
domains. There is an obvious restriction that there must be no loops.
Meta-bridges do more than just map an application onto an architecture.
I can find examples where two domains are connected together using a
mixture of both types of bridge.

A domain chart must express more than just message-passing bridges from
the application model to the platform model. It is currently used to
express both the structure of messaging and the structure of RD. With
no notation to differentiate these uses, it becomes difficult to use.
We do not yet have the techniques that are required to successfully
exploit complex RD structures. Thus there is a tendency to draw domain
charts that show only messaging; and then put a big domain at the
bottom called "architecture" with an ill-defined arrow leading to it.

Finally, I can provide an explanation of implementation domains that
does not violate my "simple concepts are simple" rule: an
implementation domain is a pre-collapsed set of "proper" domains (which
may have only a virtual existence).

Dave. Not speaking for Mitel Semiconductor.

--
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277  mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306  http://www.gpsemi.com

Subject: Re: (SMU) Domains

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Most people partition a problem using horizontal and
> vertical slices. Vertical partitioning allows a single
> abstraction layer to be split into distinct "components".
> Horizontal partitioning splits a distinct component into
> layers that span from the application abstraction to a
> platform abstraction.

I guess I am not with most people! :) Unless I am and don't know it. I
would like to see an example table using this approach.

Thanks for the insight.

Kind Regards,

Allen

Subject: Re: (SMU) Domains

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> A domain chart must express more than just message-passing
> bridges from the application model to the platform
> model. It is currently used to express both the structure
> of messaging and the structure of RD. With no notation
> to differentiate these uses, it becomes difficult to use.
> We do not yet have the techniques that are required to
> successfully exploit complex RD structures. Thus there is
> a tendency to draw domain charts that show only messaging;
> and then put a big domain at the bottom called "architecture"
> with an ill-defined arrow leading to it.

As it happens, we have gotten out of the habit of even drawing the
bridges to Architectural and Implementation domains on our charts.
However, our justification was more simple-minded: these domains tended
to be ubiquitously connected so the bridges added little insight, and
because we have lots of service domains the chart becomes unreadable.
It is nice to discover that we actually have a more rigorous
justification for leaving the bridges off.

I agree that the bridges into the Architectural and Implementation
domains do not adequately reflect what is going on, especially when
trying to map them in a 2-D space. But now that you have pushed me into
thinking about this, I think I would move in the opposite direction --
to leave those bridges off the OOA Domain Chart entirely rather than
adding a new chart notation.

In fact, I think the Architectural and Implementation domains do not
belong there unless they represent realized elements that would always
be in any implementation of the given application but would probably
not be in most other applications. I have always been uncomfortable
about putting things like C++ or MFC or ObjectStore as implementation
domains because we should be able to port the application into
situations where other mechanisms are possible, even if we don't
currently plan to do that. I would much rather see something like
Source Language or GUI Interface or Data Store Management as the
domains. But at this level of abstraction they are trivialized since,
given the nature of most of today's applications, one is merely being
an Apostle Of The Obvious to state them.

It seems to me that your Bridges of the Second Type are really
artifacts of the RD. If so, then they should appear in a separate RD
notation that provides the links between the abstractions of the OOA
and the reified, concrete implementation. Similarly, I don't think
Implementation domains and, possibly, the Architectural domains belong
in the Domain Chart abstraction if they are merely surrogates for the
architecture's mechanisms -- rather they belong in some sort of OOA of
Architectural Mechanisms to which the Bridges of the Second Type
connect in the RD description.

--
H. S. Lahman                          There is nothing wrong with me that
Teradyne/ATB                          could not be cured by a capful of Drano
321 Harrison Av. L51
Boston, MA 02118-2238
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Subject: RE: (SMU) Domains

Todd Cooper writes to shlaer-mellor-users:
--------------------------------------------------------------------

[2nd try at posting]

> An issue that pops up with some frequency is: How do we
> come up with a good domain chart?
>
> Given that this is a cornerstone of the method, I have
> several questions for you:
> (1) Is this really an issue? A technical issue?
>     An issue of uncertainty? An issue of presentation?

Absolutely, yes. It is an issue, exacerbated by the fact that "good" is
the consequence of a delicate balancing act between numerous competing
interests from both technical and non-technical aspects of a project,
and over time, multiple projects as well as project teams. Presentation
can be a problem, though orthogonal representations or client/server
matrices usually do the trick.
Many times, though, people don't have a clear understanding of the four
layers and how domains interact over layer boundaries (e.g., many times
when I show that a service domain can have a direct bridge with a
platform domain and explain the ramifications, the reaction is a bit of
a stunned "You can do that?!").

"Uncertainty" always plagues practitioners when they are dealing with
multiple uncertainties and issues which are hard to quantify, analyze
and 'engineer' through. For many companies, processes for product
definition and specification followed by leveled sets of development
plans, etc., are still pretty young (i.e., most companies are just
starting to ratchet up to SEI Level 3+, and approach a 4). The domain
chart is the artifact most affected by project development processes,
resources, reuse, etc., the stuff that makes software engineers jump
out of bed in the morning! Or was that sit up straight with a cold
sweat in the middle of the night - screaming?

> (2) Assuming it's an issue, what exactly is the problem?
>     Starting a domain chart? Knowing you have a good one?
>     A need for a library of hints or patterns? Testing?
> (3) Assuming we can identify the issues, what would help?
>     A paper describing "how to" come up with a domain chart?
>     A conversation on e-SMUG? A 'guide' to domain charts?

(2) & (3) sound great - when do you think you can have them ready for
us? (-;# Domain Interaction/Scenario Diagrams are definitely a must...

> (4) Given all that, do you have any insights that really
>     made you 'grok' separation of subject matters that you
>     might want to share?
>
> A lot of questions, I know, but I've been sitting on this
> for a while, and you all have been quiet lately ;)
>
> Any ideas or comments would be appreciated.
>
> -- steve mellor

Subject: Re: (SMU) Domains

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

[This message may serve as a response to Allen Theobald's post, though
that isn't its primary purpose.]

Lahman wrote:
> I agree that the bridges into the Architectural and Implementation
> domains do not adequately reflect what is going on, especially when
> trying to map them in a 2-D space. But now that you have pushed me
> into thinking about this, I think I would move in the opposite
> direction -- to leave those bridges off the OOA Domain Chart
> entirely rather than adding a new chart notation.

You may be right about not always including the architecture. However,
I believe that both types of bridges are needed in a system. Let me
give a simple example.

As I have mentioned before, our application is a model of a
microcontroller. Our domains include: models of components in the
microcontroller (e.g. UART, DMAC, ARM processor); a model of the bus
architecture of the system (this isn't an SM architecture - but see
below for an opposing view); and a top level domain: the
microcontroller itself.

This top level domain can be very simple, because lower level domains
do most of the work. Indeed, in simple cases it is nothing more than a
static specification domain (no state models). The build process uses
the microcontroller configuration to determine the populations of the
lower level domains. Examples of information to be mapped include the
memory map, interrupt channel usage and DMA trigger routing.

> It seems to me that your Bridges of the Second Type are really
> artifacts of the RD.
> If so, then they should appear in a
> separate RD notation that provides the links between the
> abstractions of the OOA and the reified, concrete implementation.

I believe that the mappings of populations are a necessary part of the
system description. The fact that I'm using RD (rather than reading in
a run-time config file, or other mechanism) is not relevant. The
information supplied by the top level domain is required; and building
an API using wormholes is a tedious and pointless exercise.

You may argue that this isn't an example of a meta-bridge; just an
aspect of "normal" bridges that is not properly explored in PT
literature. That may be true; so I can also provide an example of
meta-bridges more deeply embedded in the system.

The models that describe the functionality of components use attributes
to store information. This information is accessed using the
microcontroller's on-chip bus. It is possible (and useful) to construct
a table that describes the mapping of attribute-instances onto
bitfields within registers.

This information is a necessary part of the system description. The
simplest approach is to embed the information in a meta-bridge. When
the information is read (using the bus), the client domain does not
need to know (thus no wormholes to build and test). The reverse bridge
- when the bus is used to change information - does require wormholes
because the client must take responsibility for changing its
attributes.

The effect is that the bus model takes responsibility for providing the
attributes of component domains; i.e. it is a mini-architecture.
However, the system is built using a more generic architecture. The
simplest way of achieving this is for a script to read the domain
description file; rip out the attributes and replace them with
wormholes; and then to pass the modified file to the main code
generator.

And this leads to another important aspect of identifying domains: if
there is a repeating pattern within a domain (which the
attribute-wormholes would be) then it is often useful to embed this
pattern in a script: the script is a manifestation of one domain; the
script's input is another. If you're not careful then this approach can
lead to fragile systems; but I believe that once we properly understand
the limits of RD then the fragility can be overcome.

Dave. Not speaking for Mitel Semiconductor.

--
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277  mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306  http://www.gpsemi.com

Subject: RE: (SMU) Domains

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Todd Cooper writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> [2nd try at posting]
>
> > An issue that pops up with some frequency is: How do we
> > come up with a good domain chart?
> >
> > Given that this is a cornerstone of the method, I have
> > several questions for you:
> > (1) Is this really an issue? A technical issue?
> >     An issue of uncertainty? An issue of presentation?
>
> Absolutely, yes. It is an issue, exacerbated by the fact that "good"
> is the consequence of a delicate balancing act between numerous
> competing interests from both technical and non-technical aspects of
> a project, and over time, multiple projects as well as project teams.

Thanks, Todd, for hitting the nail on the head for me!
But I'd like to go a bit further, elaborating on something brought out
by Lahman's response on this: the domain chart suffers from "mission
overload".

I have no problem with the domain chart as a map of "subject matter
dependency". But it really starts to get unwieldy when it takes on the
roles of:

* a) system design document: all "deliverables" are shown with all
  their interrelationships, including things like purchased s/w
  packages, reusable subsystems, and programming languages.

* b) software development plan: the sequence of all domain analysis
  steps and bridge design is spelled out for the "complete system".

Because it all looks so clean, it creates a temptation to pick
milestones based on finishing domains and simulating them. But it is
not easy to see where to put in architecture effort, where to
prototype, or what the opportunities for subcontracting might be.

Throw in "software process" concerns, QA questions, and reusable
"domains", and you have way more than a half day's work (especially if
you have 50+ domains).

To even begin to talk about what a good domain chart is we have to
decide what it is we don't expect it to do.

-Chris

-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

Subject: Re: (SMU) Domains

peterf@pathfindersol.com (Peter J. Fontana) writes to
shlaer-mellor-users:
--------------------------------------------------------------------

At 08:00 PM 4/22/98 -0700, shlaer-mellor-users@projtech.com wrote:
> Steve Mellor writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> An issue that pops up with some frequency is: How do we
> come up with a good domain chart?
>
> Given that this is a cornerstone of the method, I have
> several questions for you:
> (1) Is this really an issue?

Absolutely - our experience shows that an improperly partitioned domain
model is the most significant impediment to beginners trying to reach
their first success, and to experienced projects who have nagging
issues they can't seem to find the root cause of. We find that a poor
domain model can make bridging quite difficult, especially with legacy
(hand-coded) and off-the-shelf components.

Bottom line: virtually every client we've consulted to that had
analysis in place (not a start-up project) has had significant problems
with their domain model. This includes some very experienced teams.
I'll go out on a limb here and speculate that over 25% of ALL projects
currently applying OOA/RD to deliver a "real" system have significant
problems with their domain models.

> (4) Given all that, do you have any insights that really
>     made you 'grok' separation of subject matters that you
>     might want to share?

While we were working with the Teradyne ATB folks, we developed a paper
titled "The OOA/RD Software Engineering Process". There is considerable
emphasis on domain modeling. If anyone would like a copy of this in
.RTF (rich text format) for Word, please email me and I'll send it to
you.

 _______________________________________________________
| Pathfinder Solutions Inc.
| www.pathfindersol.com                      888-OOA-PATH |
| effective solutions for software engineering challenges |
| Peter Fontana               voice: +01 508-384-1392     |
| peterf@pathfindersol.com      fax: +01 508-384-7906     |
|_________________________________________________________|

Subject: (SMU) Time spent Domain Modeling

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Everyone:

A point of clarification on the time that should be spent on domain
modeling:

We believe that the 'no more than half a day on the domain chart'
comment applies only to the _initial sketch_. The reasoning for this is
simply that you have very little information on the average domain
chart. Assuming some number of domains (10) and bridges (20), say,
that's only 30 pieces of information, and agonizing over the chart IN
THE ABSENCE OF ANY REAL INFORMATION is a mistake that turns the domain
chart into a bone of contention and not illumination.

With the initial domain chart we can begin object blitzing in each of
the domains. This often reveals misunderstandings of the domain chart
(one wafer manufacturing project we did years ago had a Wafer object in
half a dozen domains--Gong!) that will help in reconstructing/adjusting
the domain chart before team members become over-invested in it. (This
is one of the reasons why our materials show 'typical objects'.)

Clearly, a project experienced in domain charting, information
modeling, and the subject matters at hand can usefully spend more time
on the initial domain chart, and gain from reducing the number of
iterations between object blitzing and domain charting. And this will
be especially true if you ALSO have a lot of domains. But if you don't
have this experience, I continue to believe that the first cut
shouldn't take long.

As Chris Lynch points out, the Domain Chart is the guide for a lot of
project-wide and system-wide issues. (I don't entirely go along with
the 'mission overload' assertion, but that's another point.) In
particular, the technical lead/project manager will use
assumptions/requirements pairs to adjust and refine the domain
boundaries _throughout the project_. As such, the TL/PM virtually lives
with the domain chart.

-- steve

Subject: Re: (SMU) Domain Modeling Prep Tool

Kenneth Cook writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:06 AM 4/28/98 -0700, Steve Mellor wrote:
> Steve Mellor writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> With the initial domain chart we can begin object blitzing
> in each of the domains. This often reveals misunderstandings
> of the domain chart (one wafer manufacturing project we did
> years ago had a Wafer object in half a dozen domains--Gong!)
> that will help in reconstructing/adjusting the domain chart
> before team members become over-invested in it. (This is
> one of the reasons why our materials show 'typical objects'.)
>
> Clearly, a project experienced in domain charting, information
> modeling, and the subject matters at hand can usefully spend
> more time on the initial domain chart, and gain from reducing
> the number of iterations between object blitzing and domain
> charting. And this will be especially true if you ALSO have
> a lot of domains. But if you don't have this experience,
> I continue to believe that the first cut shouldn't take long.

While reading this, I imagined a tool, similar to a tax preparation
tool, that guided a group through the domain charting/object blitz
process.
Asking questions, checking "dictionaries" for possible subject matter breaches, offering common domains, etc. This would be aimed at groups which are not experienced in domain charting. Now if only it could sell as many units as TurboTax :-) Ken Cook Softwire Corp ken.cook@softwire.com http://www.softwire.com Subject: Re: (SMU) Domains lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > You may be right about not always including the architecture. However, > I believe that both types of bridges are needed in a system. Let me > give a simple example. I agree completely that both types of bridges are there and need to be recorded somewhere. I just don't think they both go on the Domain Chart. > I believe that the mappings of populations are a necessary part > of the system description. The fact that I'm using RD (rather than > reading in a run-time config file, or other mechanism) is not > relevant. The information supplied by the top level domain is > required; and building an API using wormholes is a tedious and > pointless exercise. I find the second sentence interesting. I would regard reading in a run-time config file to be a purely architectural mechanism. The crucial thing to me is that S-M leaves the initial population of domains unspecified. To me this means it is regarded as an issue for the RD specification. Since I agree those populations are crucial to the overall system, I think the thing that is lacking is a proper, rigorous notation for specifying the RD. And if we are going to add a notation for a new type of bridge, I would just prefer it to be in an RD specification rather than in the Domain Chart. > You may argue that this isn't an example of a meta bridge; just > an aspect of "normal" bridges that is not properly explored in > PT literature. That may be true; so I can also provide an example > of meta bridges more deeply embedded in the system. Meta bridge I'm not so sure about (it seems that everywhere I look nowadays everything has been metastasized to the point where I don't know what the word means anymore). But I agree it is certainly a different critter than the "normal" bridges on the Domain Chart. > The models that describe the functionality of components use > attributes to store information. This information is accessed > using the microcontroller's on-chip bus. It is possible (and > useful) to construct a table that describes the mapping of > attribute-instances onto bitfields within registers. > > This information is a necessary part of the system description. > The simplest approach is to embed the information in a meta- > bridge. When the information is read (using the bus), the > client domain does not need to know (thus no wormholes to > build and test). The reverse bridge - when the bus is used > to change information - does require wormholes because the > client must take responsibility for changing its attributes. I think I am missing something here. I assumed that the models were populated with attribute values and register mappings for a particular application (microcontroller). If so, I am not sure why the reverse bridge is necessary -- unless the microcontroller is the hardware analog of Forth and can modify its circuitry during operation. So I better get this straight before going on.
Subject: (SMU) Domains Steve Mellor writes to shlaer-mellor-users: -------------------------------------------------------------------- Everyone, Paul Higham of Nortel has tried to post this on the group, but to no avail. Hence, I am posting it for him, and we're following up on the source of the problem. -- steve mellor -------------------------message follows----------------- 1998 Apr 29 at 17:59 To: shlaer-mellor-users@projtech.com (BNR400) From: Paul (D.P.) Higham :6S52-M (BNR) MTL BNR Subject: Domains For a definitive textbook on the Theory of Domains I believe that several avenues need to be explored. I would see the following as potential 'chapters' in such a textbook: 1. The nature of a domain 1.1 Characterization What is needed is a definitive set of criteria for deciding when a group of objects deserves to be a domain in its own right. As pointed out by Lahman and Whipp, the term 'subject matter' is insufficiently precise to derive any workable set of criteria from. 1.2 Classification There are different types of subject matter and the existing classification of domain into {client, server, application, architecture, realized} could be made richer. 2. The nature of a bridge 2.1 Characterization Again a matter for precise definition: is a bridge a single wormhole or simply an ordered pair of domains with wormholes, services, requirements etc. as attributes? 2.2 Classification As Dave Whipp has pointed out, there are different kinds of bridge. Bridges into an architecture have a fundamentally different nature from those between non-architectural domains. Surely there are other kinds? 3. The topology of a domain chart We know that a domain chart is a directed graph: nodes are domains and edges are bridges, but I don't think we have a clear statement on the following questions: 1. Is the domain chart a directed acyclic graph? All the examples produced by Project Technology's papers have no cycles but some legitimate examples produced by Kennedy Carter's literature do. 1.1 Does a procedure exist for removing these cycles? 1.2 Should a procedure exist for removing these cycles? 2. If the answer to Question 1 is YES then it can be shown that there must exist domains that are not servers, i.e., have no bridge into them from a client. If we call these pure clients, then must a domain chart have a unique pure client? <><<>><> Clearly there are other chapters and sections, but enough of the nosebleed view for now. I would like to offer an analogy that I sometimes find helpful. Consider a house as the analog of a system. If you are living in the house, or wanting to buy or sell one, or put your children to sleep in one, then you would manage the complexity of "houseness" by dividing it up into rooms according to their function: bathroom, kitchen, football-watching room, entropy closet, etc. One might liken this to a functional decomposition with well-defined interfaces (doorways) between components. However this is not the way that houses are built, otherwise you could easily have ABS drains in the bathroom and PVC in the kitchen. The general contractor will typically subcontract out the plumbing, the electrical work, the carpentry, the plastering, the roofing, and probably a whole host of other things that have a compelling likeness to the concept of a domain. What I glean from this analogy is that a domain (or at least one TYPE of domain, ref. 'chapter' 1.2 above) may have the following properties: 0. A domain makes assertions about system CONSTRUCTION not system USE.
1. There are identifiable experts that know about the domain. 2. There exist training programs that can produce these experts. 3. There are specialized tools that the experts use. 4. There are standards that can validate the quality of work in a domain. 5. The domain is reusable in other systems. 6. The services provided by a domain are well enough documented as to be understood by all the other system constructors. Can anyone out there add to this list, delete from this list, or expand on items in the list? Paul Higham NORTEL Subject: Re: (SMU) Domains Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > Responding to Whipp... >> I believe that the mappings of populations are a necessary part >> of the system description. The fact that I'm using RD (rather than >> reading in a run-time config file, or other mechanism) is not >> relevant. The information supplied by the top level domain is >> required; and building an API using wormholes is a tedious and >> pointless exercise. > I find the second sentence interesting. I would regard reading in > a run-time config file to be a purely architectural mechanism. And I would agree. The mechanism is not important for the domain chart. The important fact is that there is a transfer of information from one domain to the other. This interaction must be shown on the domain chart as a bridge; and be specified in the bridge description (the description should be independent of the mechanism - but the fact that the interaction isn't a wormhole is an OOA issue as well as an RD issue.) > The crucial thing to me is that S-M leaves the initial > population of domains unspecified. I will have to take your word for this. Our models require a template population. Although models of components have potential for interesting configurations, real hardware tends to have fewer variables for component instantiation. The templates describe the fixed aspects, whilst allowing variable characteristics to be described. A complete chip has even less flexibility, as the top level configuration flows down to populate the templates. The code generator uses the model's population to generate the final code. > To me this means it is regarded as an issue for the RD > specification. Since I agree those populations are crucial > to the overall system, I think the thing that is lacking is > a proper, rigorous notation for specifying the RD. If the population is a necessary part of the system, then it cannot be specified purely as part of the RD. RD is just one possible process for producing code from a model; OOA does not exclude elaborative techniques. By confining the population information to the RD, you would be excluding it from other development processes. The exclusion of other development processes would not be too important. However, the incorrect placement of the information ("one fact in the wrong place" ;-) ?) would distort the definition of the RD process, potentially increasing its complexity and reducing its utility. Experience tells me that the incorrect placement of information in a translation process very quickly leads to fragility. It appears to be an inherent instability in the translation process. Minor defects quickly mushroom. If a solution to this problem is not found then the full potential of translation will not be realized and we will be restricted to the monolithic architecture. The distribution of translation throughout a system will not be possible. >> [...]
When the information is read (using the bus), the >> client domain does not need to know (thus no wormholes to >> build and test). The reverse bridge - when the bus is used >> to change information - does require wormholes because the >> client must take responsibility for changing its attributes. > I think I am missing something here. I assumed that the > models were populated with attribute values and register > mappings for a particular application (microcontroller). If > so, I am not sure why the reverse bridge is necessary -- > unless the microcontroller is the hardware analog of Forth > and can modify its circuitry during operation. So I better > get this straight before going on. The trouble with simple examples is that, when I start oversimplifying things, information is lost. When we start getting into discussions about it, the focus of the thread can start to drift. I'll try to steer clear of application-dependent concepts. In brief, the situation is: 2 domains, one implementing the attributes of the other. However, both domains may wish to initiate a modification of the value of an attribute (the "register" domain detects bus accesses to the register, thus requiring a modification of the attribute; and the "peripheral" domain may modify the value as part of its behaviour). One way to avoid circular situations is to ensure that the architectural bridge is one-way; but that an additional bridge exists to transfer messages in the other direction. It's difficult to know how to describe this in SM; I tend to use the term "reverse bridge" for the bridge that passes messages from the architecture to the client. Dave. Not speaking for Mitel Semiconductor. -- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com fax. +44 (0)1752 693306 http://www.gpsemi.com Subject: Re: (SMU) Domains lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Higham... > 1. Is the domain chart a directed acyclic graph? > > All the examples produced by Project Technology's papers > have no cycles but some legitimate examples produced by > Kennedy Carter's literature do. > > 1.1 Does a procedure exist for removing these cycles? > 1.2 Should a procedure exist for removing these cycles? An interesting question. I believe the answer is that the graph will have cycles. The reason I feel this way is that I think there are situations where individual domains play distinctly different roles during an execution. If this is true, then it follows that their relationships with other domains may change as the roles change. Thus it is conceivable that the client/service relationship could reverse, depending upon the nature of the current roles of the domains. Given this logic, the key question becomes: can domains play different roles? I think I can demonstrate a real case where this is true. We build software for testers that is composed of plug & play bundles of software where each bundle is a standalone product composed of multiple domains. Now the core element is a Mongo Device Driver (MDD), which is a standalone product. But it can also be used in conjunction with our Digital Diagnostics (DD) package, which is another standalone product. When we put these two products together to form a new product we have to merge two Domain Charts, each having an Application Domain. The simplistic view is that one has to be a service to the other when combined.
The MDD is the logical choice to run the show because it is needed whether the tests fail or not. Thus the MDD invokes the services of the DD whenever a test fails. So far, so good. The problem arises when there is a failure and the DD gets control. It wants to run the test again to collect specific data. It is built to invoke whatever software is running the tester as a service. That is, it wants to invoke the MDD as a service. The bottom line is that MDD is built to execute tests and collect data in response to external requests and it is also built to invoke an external diagnostic service (if one has been registered) in the event of a failure. Similarly DD is built to take control and invoke the available test execution software as a service. Essentially the role played by MDD changes as soon as a failure occurs. The obvious question is: why not introduce a new Application Domain for the combined product to which both MDD and DD are services? The answer is that it is unnecessary. By the nature of the programmatic interfaces all the communications can be taken care of in the bridges because the standalone versions of MDD and DD already have all the interface functions that the other domain needs. Thus we make the combined product by simply gluing them together with bridges. > 2. If the answer to Question 1 is YES then it can be shown > that there must exist domains that are not servers, > i.e., have no bridge into them from a client. If we call > these pure clients, then must a domain chart have a > unique pure client? Yes, this clearly follows. A corollary question is: may a Domain Chart have more than one pure client? -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) Re: Your Message Sent on Thu, 30 Apr 1998 17:02:49 +0200 blunden@ens.ascom.ch (Blunden Iain) writes to shlaer-mellor-users: -------------------------------------------------------------------- From amc02@aub.edu.lb Thu Apr 30 16:40:37 1998 X-Authentication-Warning: projtech.projtech.com: majordom set sender to owner-shlaer-mellor-users@projtech.com using -f From: amc02@aub.edu.lb Date: Thu, 30 Apr 1998 17:02:49 +0200 (EET) To: shlaer-mellor-users@projtech.com amc02@aub.edu.lb writes to shlaer-mellor-users: -------------------------------------------------------------------- signoff A very strange message to receive from a discussion forum !!!! Is this trying to tell me something which I don't know !!!! Subject: Re: (SMU) Domains lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > In brief, the situation is: 2 domains, one implementing > the attributes of the other. However, both domains may > wish to initiate a modification of the value of an > attribute (the "register" domain detects bus accesses > to the register, thus requiring a modification of the > attribute; and the "peripheral" domain may modify the value > as part of its behaviour). > > One way to avoid circular situations is to ensure > that the architectural bridge is one-way; but that an > additional bridge exists to transfer messages in the other > direction. It's difficult to know how to describe this in > SM; I tend to use the term "reverse bridge" for the bridge > that passes messages from the architecture to the client. OK.
I assume "peripheral" contains the attributes that "register" implements (via register mappings) and that "register" is an implementation domain while "peripheral" is a service domain. If that is the case, then it seems to me that the legerdemain with scripts and wormholes is not very relevant to the OOA itself -- the meta bridge is just another translation infrastructure, akin to a multi-pass compiler. We do a similar thing, admittedly not the same thing, by having write accessors for certain attributes automatically generate bridge calls into another domain to do register writes. In the client OOA one just sees a write accessor, but the implementation is really a wormhole. This has, IMHO, a disadvantage in that one does not know which attributes reflect this behavior when looking at the client OOA. Worse, there is a bridge on the Domain Chart that has no manifestation in the domain OOA. The facts of this special communication are buried in the specific accessor colorations. Like you, I would prefer to have higher level notation that describes this special communication. As an Analyst, I might make use of the information when building the OOA, but I would still regard it as a specification for the RD translation rules. [I belong to the school that believes Analysts do OOAs and specify translation rules while Architects build translation infrastructures. Since the Analyst is doing both the OOA and the RD specification, there is no left hand/right hand problem.] I would like the colorations to be consolidated and abstracted in the specification -- I want to be able to go to a single, predefined place in the spec to examine the information rather than rummaging through the translation itself. Similarly, if I had dichotomous bridges such as yours, I would want a central place where I would routinely find such information. Actually -- and here we come closer together -- I would not get too bent out of shape if there were simply a different view of the attributes that represented the RD perspective. For example, if I, say, right click on the attribute I get an RD dialog to view/specify translation information. My point is that tool engineering can merge the information fairly seamlessly, but I still want that subtle distinction between the RD specification and the OOA specification. So, to summarize, I think we agree pretty much on the following: (1) Bridges (and possibly other elements) can have special characteristics that are relevant to the system description. (2) There is no formal notation in either the OOA nor the RD to describe these characteristics. (3) Therefore a new notational enhancement is desirable to capture such things. and we disagree on whether this notational enhancement goes in the OOA Domain Chart or into some sort of higher level RD specification. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: (SMU) Posting Problems "Ralph L. Hibbs" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello All, The recent threads of discussion has created posting interest from some infrequent contributors. The result has been difficulties in posting. I'll explain the most likely cause and some possible solutions. Many months ago we installed some filters on the mailing list to better control spamming to the list. 
Subject: (SMU) Posting Problems "Ralph L. Hibbs" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello All, The recent threads of discussion have created posting interest from some infrequent contributors. The result has been difficulties in posting. I'll explain the most likely cause and some possible solutions. Many months ago we installed some filters on the mailing list to better control spamming to the list. The main filter was one that only allows subscribers to post to the list. While this might reduce discussions a bit, we felt the reduced spamming was a higher priority. This creates a problem for people who move or modify their email address. When people move or have their email addresses changed, most corporations create some invisible forwarding mechanisms. These forwarding mechanisms ensure you receive the ESMUG postings, but do not help your posting. Posting is only allowed from the "subscribed" email address. CORRECTION OPTIONS 1) The simplest is to resubscribe to ESMUG. If your subscription request is accepted, then you should be able to post with no problem. If you end up receiving two copies of the postings, then unsubscribe your old email address. 2) Try sending yourself a test message. In your mail program, tell it to show you all the details associated with the message. In the details check your headers to see if your addresses all match. Sometimes people will have a different alias defined, which will confuse the list manager. 3) If 1 & 2 fail, then report your problem to "support@projtech.com". Our support team will do some investigation as they have time. Please respect that list management problems are one of their lower priority tasks, so their responses may be slow. Sincerely, Ralph ----- Shlaer-Mellor Method: Real-Time Software Development ------ Ralph Hibbs Tel: (510) 567-0255 ext. 629 Director of Marketing Fax: (510) 567-0250 Project Technology, Inc. email: rlh@projtech.com 10940 Bigge Street URL: http://www.projtech.com San Leandro, CA 94577 -------------------- Real-Time, On Time ------------------------- 'archive.9805' -- Subject: Re: (SMU) Domains? "Paul Higham" writes to shlaer-mellor-users: -------------------------------------------------------------------- I think I should perhaps rephrase the original question as the following two questions: 1. Is it always possible to produce an acyclic domain chart for a given system? 2. Is there sufficient software maintenance/construction advantage in constraining the domain chart to be acyclic? Freely (and possibly unfairly!) recasting Lahman's reply to the original question, I would claim that question 1 remains unanswered, and guess that his reply to the rephrased question would be 'no', because we can conceive of situations in which there is a cyclic alternative. Note that his example offered an acyclic option for the domain chart. Thus I did not read that some domain charts must NECESSARILY have cycles, but instead that the cyclic alternative offered some convenience. Claire Cote pointed out to me that domains belonging to a cycle of length two (domain A provides services to domain B and domain B provides services to domain A) would have greater difficulty in passing the substitution or replacement test since each must not only provide certain services but also be able to use the ones from the domain it is serving. I strongly suspect that other self-reflexive, but more subtle, issues would occur in cycles of length greater than two. I believe that the answer to question 1 is 'yes' but I am still working on a proof. Hopefully I can submit that soon. Paul <> <<>> <> In message "(SMU) Domains" sent on Apr30, shlaer-mellor-users@projtech.com writes: > [...]
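Higham's questions 1 and 2 can be posed operationally. The sketch below (Python; the representation of the chart as client-to-server bridge pairs is invented for illustration) checks a domain chart for cycles and lists its pure clients. It also shows the existence half of question 2: an acyclic chart must have at least one pure client, since every finite DAG has at least one node with no incoming edge:

    import collections

    def cycle_and_pure_clients(bridges):
        # bridges: iterable of (client, server) pairs from the domain chart.
        servers = collections.defaultdict(set)
        domains = set()
        for client, server in bridges:
            servers[client].add(server)
            domains.update((client, server))
        # A pure client is a domain with no bridge into it from a client.
        pure_clients = domains - {s for ss in servers.values() for s in ss}
        # Depth-first search for a cycle (grey = on the current path).
        WHITE, GREY, BLACK = 0, 1, 2
        colour = dict.fromkeys(domains, WHITE)
        def dfs(d):
            colour[d] = GREY
            for s in servers[d]:
                if colour[s] == GREY or (colour[s] == WHITE and dfs(s)):
                    return True
            colour[d] = BLACK
            return False
        has_cycle = any(colour[d] == WHITE and dfs(d) for d in domains)
        return has_cycle, pure_clients

    # Lahman's MDD/DD case: each invokes the other, so the chart cycles
    # and, with no other clients, neither domain is a pure client.
    print(cycle_and_pure_clients([("MDD", "DD"), ("DD", "MDD")]))
    # -> (True, set())

Whether the chart *should* be acyclic is, of course, the open question; this only makes the property checkable.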
Subject: Re: (SMU) Domains? lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Higham...
> 1. Is it always possible to produce an acyclic domain chart for a given > system? > Freely (and possibly unfairly!) recasting Lahman's reply to the original > question, I would claim that question 1 remains unanswered, and guess that > his reply to the rephrased question would be 'no', because we can conceive > of situations in which there is a cyclic alternative. > > Note that his example offered an acyclic option for the domain chart. Thus > I did not read that some domain charts must NECESSARILY have cycles, but > instead that the cyclic alternative offered some convenience. You are correct, I would answer 'no'. I believe that it is a problem space issue that domains may reverse their relative roles in certain circumstances. If so, proper modeling requires that this be reflected in the domain chart -- so the model would be incorrect if it did not have cycles in those role reversal situations. Note that to eliminate the cycles one would have to redesign the internals of the domains or add a higher level domain that invokes the cyclic domains as services. The former is clearly unacceptable since it would make the domain internals dependent upon application context, which would defeat large scale reuse. I submit that adding the higher level domain obscures the nature of the role reversal -- the fact that either domain directly requires the services of the other domain is no longer evident. Worse, the real service request is actually moving opposite to the bridge's flow on the Domain Chart as it moves to that higher level domain. That is, the cycle still exists -- it has just been hidden from view. [Having said all this and because I use the example again below, let me anticipate sundry Astute Observers and point out that my example had a flaw. MDD does not have to invoke DD directly; it merely has to report that the test failed, which is not a service request. Some Higher Authority could invoke MDD and then invoke DD, which, in turn, would invoke MDD. This would provide an acyclic chart with MDD as a lower level service domain invoked by both Higher Authority and DD. In this situation MDD and DD would each only have one role to play. My counter is that role playing at the domain level is a genuine possibility, and the example demonstrates the plausibility of this hypothesis while remaining simple and realistic. While in this particular case it could be modeled without cycles, it is certainly plausible that there are other situations where this would not be the case. Note that if MDD is more sophisticated than just an instrument driver, say a Digital Test that is part of a Rack & Stack Test application, then it would be appropriate for it to invoke DD directly but DD would still need its services to re-execute the test -- but going into this just introduces red herrings about the use of other domains within Digital Test and whatnot.] > Claire Cote pointed out to me that domains belonging to a cycle of length > two (domain A provides services to domain B and domain B provides services > to domain A) would have greater difficulty in passing the substitution or > replacement test since each must not only provide certain services but also be > able to use the ones from the domain it is serving. I strongly suspect that > other self-reflexive, but more subtle, issues would occur in cycles of > length greater than two. I am not sure I see the difficulty with the replacement test.
I agree that the details will inevitably be different -- the core reason why class reuse has failed -- but that is why the methodology provides the bridge formalism. I don't see anything that can't be fixed by supplying new bridge glue when one replaces domains. Going back to my original example and being somewhat more specific by dealing with guided probe diagnostics while making the simplifying assumption that MDD and DD are single domains I can make the following assertions: (1) Any MDD domain has to provide the service of re-executing a test. [As it happens there are other reasons for this besides diagnostics.] (2) Any MDD domain has to invoke the services of some diagnostics mechanism if there is a failure. [On a GoNoGo tester one would not have diagnostics, so the cycle issue would be nonexistent. Assume MDD is more sophisticated and is responsible for diagnosis as well as test from its client's view.] (3) Any DD domain has to provide the service of diagnosing failures. (4) Any DD domain doing guided probe has to invoke the services of a test driver several times to re-execute the test (one for each probe). So long as the domains provide and request their relevant, generic services, then it seems to me that they are replaceable except for the details of the bridge mechanisms. I should be able to run the same diagnostics on different testers and I should be able to run different diagnostics engines on the same tester. Each combination will be a different application but each application will have the same cycles and they differ only in the replacement of the domains. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: RE: (SMU) Domain Cycles "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- (2nd post attempt -- sorry if duplicated. -- CDL) > >"Paul Higham" writes to shlaer-mellor-users: > > >-------------------------------------------------------------------- > > > >I think I should perhaps rephrase the original question as the > following two > >questions: > > > >1. Is it always possible to produce an acyclic domain chart for > a given > > system? > > > >2. Is there sufficient software maintenance/construction > advantage in > > constraining the domain chart to be acyclic? > > My answers for these are: > > 1: Yes -- if you want, you can merge the domains or (more > commonly) incorporate functionality of the lower domain into the > higher. > 2: Probably > > My take on this is as follows: > > 1. *Most* domain charts will be acyclic, because of the natural > hierarchy of subject matters. When this happens, it is Good, but it > is not what we're talking about. > > 2. Occasionally, there will be a mutual dependency. I have run > into this sometimes with service domains which are at the same level > of abstraction. For example, memory management and list management. > Your memory manager could be using linked lists and your linked list > manager could be requesting memory. This creates a cycle on the > domain chart. > > I think this is a signal to be cautious. For one thing, you > will need a clear understanding of how you will be avoiding an > infinite recursion of dependency. It will also lead to a level of > coupling between the domains which may exceed what you are willing to > put up with in developing a domain for reuse. 
In short, it may force > you to be clever. Choosing to be clever is one thing; being _forced_ > into it is something else again. For the example above, I would > construct a private linked list service for the memory manager and let > the list manager depend on the memory manager, removing the cycle. > > 3. As H.S. Lahman points out, over time, the service > relationship could reverse (although I have not personally done this.) > I think in this case the domain chart should be drawn separately for > each set of service relationships which can be in effect at one time. > This would highlight the temporal aspect and eliminate the possibility > of confusing roles and relationships. > > Regards, > > Chris > > ------------------------------------------- > Chris Lynch > Abbott Ambulatory Infusion Systems > San Diego, Ca LYNCHCD@HPD.ABBOTT.COM > ------------------------------------------- > Subject: Re: (SMU) Domain Cycles Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Chris Lynch wrote: > 2. Occasionally, there will be a mutual dependency. I have run > into this sometimes with service domains which are at the same level > of abstraction. For example, memory management and list management. > Your memory manager could be using linked lists and your linked list > manager could be requesting memory. This creates a cycle on the > domain chart. This example seems to demonstrate the problems caused by domain pollution. A linked list domain that uses memory management has been polluted by its architecture. (There may still be a cycle on the domain chart though.) It is very clumsy to model containers using explicit wormholes. But this is not a real problem: the correct way to model lists is to use the formalism of OOA. Containers in OOA are objects. Any bridge to a linked list should always be architectural; and the node of the list is defined by the mapped object. (However, the formalism really does need some extensions to allow architectural exceptions. If there's no resource left then the create accessor will fail. How do we model this?) A model of a linked list should use objects (not memory) to store nodes (a value attribute will have its domain defined by the client). When a new node is required, a new instance is required. This should be created using a create-accessor, not a wormhole. The architectural domain that implements the linked list in a specific context may utilise a memory management domain. The memory management domain will model memory management. (Actually, it may model resource management.) It will not use linked lists directly. It will use relationships and objects. The architecture that reifies these relationships may require linked lists (and these lists may require memory management; however, it is likely that the memory-management architecture will use linked lists that are implemented using a different architecture. This recursion is grounded when an implementation domain is reached). This leads me to another question: If a domain is used in two different contexts on a domain chart, possibly using a different architecture in each use, should the domain be shown twice on the domain chart? Does this allow all cycles to be broken? Dave. Not speaking for Mitel Semiconductor. -- Dave Whipp, Embedded Systems Group, Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ. tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com fax. +44 (0)1752 693306 http://www.gpsemi.com
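A rough sketch of the layering Whipp describes, with every name invented for illustration: the client model only creates instances; an architectural layer realizes the create accessor from a resource manager; and the resource manager itself is grounded in an implementation domain (a plain Python list standing in for raw memory). The OutOfResource exception stands in for the 'architectural exception' that he notes the formalism currently lacks:

    class OutOfResource(Exception):
        # What the create accessor raises when no resource is left --
        # the case the OOA formalism has no standard way to model.
        pass

    class Pool:                       # implementation domain: grounds the recursion
        def __init__(self, capacity):
            self._free = list(range(capacity))
        def allocate(self):
            if not self._free:
                raise OutOfResource()
            return self._free.pop()
        def release(self, slot):
            self._free.append(slot)

    class NodeArchitecture:           # architecture realizing the container
        def __init__(self, pool):
            self._pool = pool
            self._values = {}
        def create_node(self, value): # what the OOA create accessor maps onto
            slot = self._pool.allocate()
            self._values[slot] = value
            return slot
        def delete_node(self, slot):
            del self._values[slot]
            self._pool.release(slot)

    # The client model never mentions memory; it only creates instances.
    arch = NodeArchitecture(Pool(capacity=2))
    a = arch.create_node("first")
    b = arch.create_node("second")
    # arch.create_node("third")  # would raise OutOfResource

Note how the cycle Lynch describes disappears in this arrangement: the container model knows nothing of memory, the memory (resource) model knows nothing of containers, and only the architectures beneath them do -- a chain that bottoms out at an implementation domain.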
Subject: Re: (SMU) Domain Cycles lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lynch.... > > 3. As H.S. Lahman points out, over time, the service > > relationship could reverse (although I have not personally done this.) > > I think in this case the domain chart should be drawn separately for > > each set of service relationships which can be in effect at one time. > > This would highlight the temporal aspect and eliminate the possibility > > of confusing roles and relationships. I think the Domain Chart should reflect a description of the entire Application. More particularly, it should reflect what the Application looks like through the entire duration of time for an execution of the application. If the roles change during that time, then that should be reflected in the Domain Chart. If two different Domain Charts are drawn, I would expect them to be two identifiably different Applications. There is, admittedly, a fair amount of quicksand under this position. Suppose a given Application behaves quite differently under different execution contexts, but only one execution context applies during a particular execution. Is it one Application or two? Since the implementation has not changed an iota, I think one has to argue it is still one Application. Thus this situation would argue strongly for two Domain Charts. However, this opens up a new Pandora's box regarding the temporal scope of a Domain Chart. Or even whether the Domain Chart has a temporal scope. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 321 Harrison Av. L51 Boston, MA 02118-2238 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Subject: RE: (SMU) Domain Cycles "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- > >Dave Whipp writes to shlaer-mellor-users: > >-------------------------------------------------------------------- > > [...]
> I appreciate your insights into solving the problem when you > get to start from scratch, and I agree that the domains from my > example are "polluted" by implementation. But I think that's life > when you're trying to reuse code or interface to an existing system. > That was the (unstated) environment in which I posed my example. My > impression is that the method was intended to allow such use. > > None of what I've said changes the fact that such reuse can be > enormously difficult. One of my significant past life challenges was > to develop OOA models to ride on top of a predetermined system and S/W > architecture, with black-box, non-OO components, anemic hardware, and > no support for translation. Luckily, that is behind me. > > -Chris > > ------------------------------------------- > Chris Lynch > Abbott Ambulatory Infusion Systems > San Diego, Ca LYNCHCD@HPD.ABBOTT.COM > ------------------------------------------- > Subject: (SMU) Shlaer-Mellor User's in NM? kozlowski@popler.lansce.lanl.gov (Thomas Kozlowski) writes to shlaer-mellor-users: -------------------------------------------------------------------- I am interested in hearing from or knowing of any Shlaer-Mellor users in the New Mexico area and how to contact them. I find it can be useful to have "local" people to talk to about common interests, problems, etc.; especially since we are a small group. Thanks, Tom Kozlowski LANSCE-12 Los Alamos Neutron Science Center Email: kozlowski_thomas@lanl.gov Voice:505-667-7747 Fax:505-665-2676 'archive.9806' -- Tony Klein writes to shlaer-mellor-users: -------------------------------------------------------------------- Some time ago, someone posted a query to the list looking for an OOA model of a TCP/IP stack. I was off the list for a while and did not see if there was a response (I've been unable to get at the list archives, but that's a separate issue.) This is a re-query of that question, with a twist. Does anyone have an example of how to model an application interface to a TCP/IP stack, i.e. an External Entity that represents a socket? I am attempting, merely as a learning exercise as part of a BridgePoint evaluation, to set up a test application that uses sockets and am looking for a helpful starting point. Regards, Tony Klein Tony Klein Principal Engineer kleint@NextNetworks.com NextNet, Inc. MPLS MN (612) 944-0252 Subject: (SMU) Reviewing object models bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users: -------------------------------------------------------------------- As we enter the software validation phase of our project, we are finding that the most numerous class of errors with our modeling involves inadvertently omitted transitions and unexpected transitions (events sent even though the target was not in an appropriate state).
Although we did review the models, it is clear our review process did not catch a variety of mistakes. I would like to hear of people's experience with analysis reviews and the techniques used (e.g., scenario development) and documents that were involved. Any comment on the perceived success of such reviews would also be appreciated. Bruce Bruce Levkoff Principal Software Engineer Cytyc Corporation 85 Swanson Rd. Boxborough, MA 01719 (P) 978-266-3033 (F) 978-635-1033 "Michael M. Lee" writes to shlaer-mellor-users: -------------------------------------------------------------------- Bruce, My first question, given the errors found, is: did you include, in your review of the state model, a review of the state transition table (STT)? Reviewing the STT forces you to assess all of the "ignore" and "can't happen" cells that you cite as not being modeled correctly. Carefully considering these cases frequently exposes possibilities not initially considered in the "normal" scenarios that typically drive development of the state model. WRT the success of such (SM & STT) reviews, I find them indispensable in the development process. And their effectiveness can be greatly enhanced if you can identify and assign that negative naysayer on the project who compulsively thinks nothing will ever work. One, they find a lot of the defects, and two, when you fix these defects, you have an opportunity to demonstrate that they will work ;) - Michael Lee At 10:44 AM 6/5/98 -0400, you wrote: >bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users: >-------------------------------------------------------------------- > [...] -------------------------------- M O D E L I N T E G R A T I O N Model Based Software Development 500 Botany Court Foster City, CA 94404 mike@modelint.com 650-341-2544(v) 650-571-8483(f) --------------------------------- "Paul Higham" writes to shlaer-mellor-users: -------------------------------------------------------------------- If I understand the situation correctly, it would seem to me that having the state transition tables corresponding to the state models in hand when doing a review would help. You need to do the following rather tedious check:

    for each active object {
        for each state, i.e., each row in the state transition table {
            for each event, i.e., each column in the state transition table {
                check that an explicit decision has been made regarding
                the transition for this event
                /* The trickiest part of this is to decide between "event
                   ignored" and "event can't happen".  Usually the other
                   events are already in the state model graph and are what
                   you intuitively expect to happen. */
            }
        }
    }

Hope this helps, Paul Higham NORTEL paulh@nortel.ca In message "(SMU) Reviewing object models" sent on Jun05, shlaer-mellor-users@projtech.com writes: > [...]
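Higham's tedious check mechanizes readily. Here is a runnable rendering in Python, assuming -- purely for illustration -- that the STT can be exported from your CASE tool as a mapping from (state, event) to a new state, "ignored", or "can't happen"; any cell carrying no explicit decision is flagged:

    def check_stt(object_name, states, events, stt):
        # stt: dict mapping (state, event) -> new state, "ignored",
        # or "can't happen".  Reports every cell with no explicit decision.
        complete = True
        for state in states:                  # each row of the table
            for event in events:              # each column of the table
                if (state, event) not in stt:
                    print("%s: no decision for event %s in state %s"
                          % (object_name, event, state))
                    complete = False
        return complete

    # Example with one undecided cell, which gets flagged:
    states = ["Idle", "Running"]
    events = ["start", "stop"]
    stt = {
        ("Idle", "start"):   "Running",
        ("Idle", "stop"):    "ignored",
        ("Running", "stop"): "Idle",
        # ("Running", "start") has not been decided.
    }
    check_stt("Oven", states, events, stt)

The hard part -- deciding between "ignored" and "can't happen" for each flagged cell -- remains a human judgment, as Higham says; the check only guarantees that no cell is skipped.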
"Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > As we enter the software validation phase of our project, we are finding > that the most numerous class of errors with our modeling involves > inadvertently omitted transitions and unexpected transitions (events > sent even though the target was not in an appropriate state). > [...] My suggestion is to apply a bit of rudimentary statistical process control, starting with your model review process. See, for example: Michael Fagan, Advances in Software Inspections, IEEE Transactions on Software Engineering, July 1986. RG Mays, CL Jones, GJ Holloway, DP Studinski, Experience With Defect Prevention, IBM Systems Journal, Vol 29, No 1, 1990. The point here is that since you have measured and determined the largest source of errors, you can look at ways of changing the process to eliminate those errors (or uncover them earlier). One way of doing that is to base your reviews on "checklists". I don't know if you already use checklists, but if not, you might want to start. If you are already using checklists, this suggests that your checklists could probably use some improvement. Good sources for starting to develop checklists (or improving existing checklists) are: SS Brilliant, JC Knight, NG Leveson, Analysis of Faults in an N-Version Software Experiment, IEEE Transactions on Software Engineering, Vol SE-16, No 2, February 1990. Robyn Lutz, Targeting Safety-related Errors During Software Requirements Analysis, Journal of Systems Software, Vol 34, 1996. Both of these articles talk about common software defects, and explicit examination of these common defects should make your reviews much more effective (i.e., include the items that they say are common errors as items in your checklist).
As the reviews become more effective, the developers will usually begin using the checklists in the process of developing the models instead of waiting for the reviews. Then, be sure to continue taking data about your reviews and validation processes. Look at how many of each class of error are being found in inspections. Look at how many of each class of error are sneaking through the inspections and are found later (either in verification or in the field). * Classes of errors where large numbers are found in verification or in the field suggest that the checklists ought to be modified to make sure those kinds of errors get looked for during the reviews. * Classes of errors where small numbers are found in verification or in the field suggest that maybe the mechanism for preventing those from reaching the field is working well. If the consequences of those errors are relatively small, it might be worthwhile not testing so carefully for those particular errors. * Classes of errors where large numbers are found in reviews suggest that maybe you want to tweak the development process to try to prevent those errors from ever occurring in the first place. * Classes of errors where very small numbers are found in reviews suggest that the development process does a reasonable job of preventing those errors, so it might be worthwhile dropping them from the checklist (so that you may concentrate on more prevalent errors). Note that all of these are simple "continuous process improvement" techniques, but I have seen them work amazingly well in practice. Cheers, -- steve
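Tockey's four bullets amount to a small decision table over two counts per error class: how many instances were found in reviews, and how many escaped to verification or the field. A sketch in Python, with the thresholds and counts invented purely for illustration:

    def suggest_action(found_in_review, found_later, many=10, few=2):
        if found_later >= many:
            return "strengthen the checklist -- this class is escaping reviews"
        if found_later <= few and found_in_review <= few:
            return "consider dropping the checklist item -- prevention works"
        if found_in_review >= many:
            return "tweak the development process to prevent this class up front"
        return "keep monitoring"

    defect_counts = {                 # error class: (in reviews, found later)
        "omitted transition":  (3, 14),
        "unexpected event":    (12, 1),
        "wrong event data":    (1, 1),
    }
    for err, (rev, later) in defect_counts.items():
        print("%s: %s" % (err, suggest_action(rev, later)))

The classes and numbers here are made up; the value is in the loop -- measure, classify, and let the data drive the checklist, exactly as the articles cited above describe.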
peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 10:44 AM 6/5/98 -0400, shlaer-mellor-users@projtech.com wrote: >bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >As we enter the software validation phase of our project, we are finding >that the most numerous class of errors with our modeling involves >inadvertently omitted transitions and unexpected transitions (events >sent even though the target was not in an appropriate state). Certainly the use of STTs and review of STDs with STTs is (as already mentioned) invaluable. We have a document "Reviewing OOA Work Products" that briefly outlines when and how to conduct such reviews. Drop me a line, and I'll email you a copy. Even given adequate reviews, many analysis errors are difficult to detect in a manual, static review. Over the last couple of years we've worked with some clients to develop a preparatory step to the state modeling phase called Scenario-Based Object Communication Modeling. Good scenario-based OCMs can really help to avoid many common analysis problems resulting from state model hacking. In addition to OCMs, we've identified a distinct 4th phase of OOA - Dynamic Verification (DV) - where you exercise the models a domain at a time (or in tight groups), and you test with a very specific scenario-based focus to verify the proper model behavior - essentially following your OCM work. You use an object level debugging environment to support the DV activity. (Common "simulation" environments provide some of the capabilities you need for this.) _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| Neil Lang writes to shlaer-mellor-users: -------------------------------------------------------------------- > > bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > [...] In previous posts Paul Higham and Mike Lee have pointed out the need to carefully review the entries in the state transition table. I strongly support their comments but I'd also like to emphasize the importance of carefully and thoroughly building the STT in the first place. Modeling teams sometimes decide that they really don't need to build one. And when they do build one, they inadvertently short-circuit the process by simply transferring the transitions they've already created in the STD, and scurrying through the remaining cells parcelling them out to Ignore or Can't Happen with little thought that there may in fact be an additional transition. Modelers should start with a completely empty STT and systematically work their way through the STT by considering each pairing of state and event with equal care and thought. In short, taking time to carefully build the STT in the first place will significantly reduce the number of those errors and will make the review process that Mike and Paul describe much easier and shorter. Neil Lang "Bryan K. Berg" writes to shlaer-mellor-users: -------------------------------------------------------------------- Bruce As someone whose job it is to review Shlaer-Mellor models for the F-16 Avionics Software, I think that what is needed is an independent review cycle. What happens to a developer, just like any other writer or creator, is that it is very difficult to see one's own mistakes or omissions. The writer or developer has a tendency to read what he/she meant rather than what he/she actually did. Now if an engineering environment is used for generating the models and this environment has consistency and accuracy checking, the environment's tools will make this cheaper and easier. However, if a tool cannot be used for whatever reason, I would consider using someone in my organization (maybe from QC) who has been trained in SM but was not involved in the development of the model to be reviewed. Bryan Berg bberg@techreps.com "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- > "Bryan K. Berg" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > ... The writer or developer has a tendency to > read what he/she meant rather than what he/she actually did. Too true.
We once toyed around with the idea of requiring that the person who used a model could not be the same person who produced it (i.e., the analyst would be required to hand off the model to a designer who must be a different person than the original analyst). Our reasoning was based on the observation that if someone's ability to do their job well was impacted by the quality of the input they received, then that person was _highly motivated_ to do a very thorough job of reviewing said input. We never made a policy of it, but where it happened by chance the reviews were unusually thorough. It also appeared to have the side benefit of preventing the team members from becoming the-one-and-only-expert in a given piece of the system.

Cheers,

-- steve

Luke Brennan writes to shlaer-mellor-users:
--------------------------------------------------------------------

>
> Certainly the use of STTs and review of STDs with STTs is (as already
> mentioned) invaluable. We have a document "Reviewing OOA Work Products"
> that briefly outlines when and how to conduct such reviews. Drop me a line,
> and I'll email you a copy.
>

I would be interested in receiving a copy of your "Reviewing OOA Work Products" document.

Thanks in advance!

lfb

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Levkoff...

> As we enter the software validation phase of our project, we are finding
> that the most numerous class of errors with our modeling involve
> inadvertently omitted transitions and unexpected transitions (events
> sent even though the target was not in an appropriate state).
>
> Although we did review the models, it is clear our review process did
> not catch a variety of mistakes. I would like to hear of people's
> experience with analysis reviews and the techniques used (e.g., scenario
> development) and documents that were involved. Any comment on the
> perceived success of such reviews would also be appreciated.

A number of people have replied with suggestions for improving the reviews that I agree with, particularly the need to rationalize all "not possible" and "ignore" entries in the STT. I am not sure what you mean by "validation phase", but if you mean domain-level simulation, then I would add one more useful trick: use cases.

You can make a quick pass through your use cases prior to full simulation with the view of tracing events only. You can do this quickly if you ignore setting the data in the state actions and simply assume values consistent with the use case wherever there is conditional event generation. This is a form of simulation, but it can be done manually fairly quickly once you have the discipline to pretty much ignore the ADFDs, so it can be part of the review. It won't catch everything, but it will catch quite a few of the problems that you cited. You can even make adjustments to throw asynchronous events on the queue.

-- 
H. S. Lahman There is nothing wrong with me that
Teradyne/ATB could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Michael M. Lee" writes to shlaer-mellor-users:
--------------------------------------------------------------------

PLEASE NOTE: These questions are directed at Steve and/or Sally as authors of the "Bridges & Wormholes" paper. I thought the response would be of general interest, however, so I'm sending it via this group.
1) Though there is no example of it in the B&W paper, I see no reason why one shouldn't be able to associate both a synchronous and an asynchronous return wormhole with a single request wormhole. For example, I may want to request a periodic notification (requiring an asynch return wormhole) _and_ receive a data output that indicates if the "away" domain has the facilities to do this (requiring a synch return wormhole).

So my first question is, can one do this? If not, why not? And then a related question: Is there any reason the OOA of OOA (Fig 6.1) doesn't model the association between the request and return wormholes?

2) The asynchronous return is described as "returning control via an external event" (e.g., pg 5, 1st bullet). Though it is never explicitly stated, I assume what's really going on here is that there is a split in the initiating thread of control where one thread returns control to the ADFD/SDFD that invoked the request (with a potential delay if data outputs are allowed) and the other continues in the "away" domain until the asynch return wormhole is invoked to "return control via an external event".

Is this the correct interpretation? If so, I would suggest clarifying this as you do for the synchronous return wormhole with and without data outputs on pg 5.

Thanks in advance for your consideration -

Michael (building bridges again) Lee

--------------------------------
M O D E L   I N T E G R A T I O N
Model Based Software Development
500 Botany Court
Foster City, CA 94404
mike@modelint.com
650-341-2544(v) 650-571-8483(f)
---------------------------------

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lee,

> PLEASE NOTE:
> These questions are directed at Steve and/or Sally as authors of the
> "Bridges & Wormholes" paper. I thought the response would be of general
> interest, however, so I'm sending it via this group.

As it happens Steve and Sally are usually lurkers rather than active participants here -- the forum is not intended to be a Q&A for Project Technology. They normally only leap in when the advice/interpretations of the practitioners take a sharp left off the Path of Enlightenment.

> 1) Though there is no example of it in the B&W paper, I see no reason
> why one shouldn't be able to associate both a synchronous and an
> asynchronous return wormhole with a single request wormhole. For
> example, I may want to request a periodic notification (requiring an
> asynch return wormhole) _and_ receive a data output that indicates if
> the "away" domain has the facilities to do this (requiring a synch
> return wormhole).
>
> So my first question is, can one do this? If not, why not? And then a
> related question: Is there any reason the OOA of OOA (Fig 6.1) doesn't
> model the association between the request and return wormholes?

To answer the first question, I believe the answer is: Yes. Check Figs. 1.3 and 1.4 where there are two arrows returning from the domain on the right. The one that doubles back directly from the input event represents the synchronous return while the one returning from the bubble (a state action's process) is the asynchronous return. Thus you can have either or both types of returns after issuing a bridge event.

As regards the third question, I think the issue is viewpoint. There is a relationship, but it is a subtype relationship. So I assume your question is about why subtyping is used rather than some other form of relationship.
If you are in your original client domain, then you think of the entire communication as a request. Let's assume the return is asynchronous for the moment. There will actually be two wormholes: one to initiate the original request (WS) and one to receive the asynchronous response (RWS), which will place the asynchronous event on the queue. Though these are different wormholes, your view of them is that they are actually just different aspects of the same request. Hence the is-a relationship.

For the synchronous case there may only be one wormhole on the ADFD, but conceptually there is still a second distinct return wormhole with different characteristics that needs to be described. This distinction is only necessary because the underlying architectural mechanism is not defined in the OOA and the methodology does not want to be bound to a specific implementation such as a function return. Therefore even a synchronous return is treated as a separate entity in the OOA of OOA. One way to think about this, which has freed me from migraines, is that a synchronous wormhole in the ADFD could be viewed as a subtype migration in the OOA of OOA when it moves from Send to Receive.

> 2) The asynchronous return is described as "returning control via an
> external event" (e.g., pg 5, 1st bullet). Though it is never explicitly
> stated, I assume what's really going on here is that there is a split in
> the initiating thread of control where one thread returns control to the
> ADFD/SDFD that invoked the request (with a potential delay if data
> outputs are allowed) and the other continues in the "away" domain until
> the asynch return wormhole is invoked to "return control via an external
> event".
>
> Is this the correct interpretation? If so, I would suggest clarifying
> this as you do for the synchronous return wormhole with and without data
> outputs on pg 5.

I believe this is correct, but "thread" carries some baggage with it. An asynchronous return wormhole places an event on the domain's event queue and the action that initiated the request continues execution until it completes. Whether there is a separate thread depends upon the interpretation of time in the domain. If the synchronous view of time is used, the return event cannot be processed until the action completes (i.e., two domain actions cannot execute at the same time). There would be no new thread -- when the action completed, control would return to the queue manager so the return event could be processed sequentially. In the asynchronous view of time, the return event could trigger execution of another action before the initiating action completed. This would be a second processing thread.

While clarification might be helpful, I think this is really an issue that follows from the basic S-M definition of how events and action executions are handled. This has not changed with the formalization of wormholes.

-- 
H. S. Lahman There is nothing wrong with me that
Teradyne/ATB could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

While lurking on the OTUG forum I came across something that might be of interest here. There is a fellow named Jaworski who has developed a system for describing notations in an objective and unambiguous way. He refers to it as a Relationship Oriented approach.
The basic idea is that he defines the concepts behind the notational artifacts and then builds a fancy relationship diagram that connects the concept dots using rigorous set and graph theory rules. His goal is to define a meta language for repositories that would allow transfer of models between different notational systems, similar to things like CDIF for graphics. I find the papers on his web site interesting. In particular, I think this approach might be utilized by CASE vendors to provide enhanced checking of models. [Once the metamodel of the notation is in place it becomes easy to match a specific application model against it.] It might also be useful as a demonstration of internal consistency of notation in any upcoming Methodology Wars. I think it could be used for formalizing RD, since RD involves mapping concepts from dissimilar contexts (OOA of OOA, implementation mechanisms, translation rules, etc.). I could even see it being used for automatic generation of bridges by providing a high level mapping of both domains' internals. If anyone is interested, the URL is http://alcor.concordia.ca/~jaworski/webmap.html. Be forewarned that the internal links in the Concordia sites don't work for outsiders, so to get at details you may have to do things the hard way (i.e., to get the figures for the "Notational Technology" paper). These can be found at http://www.cs.concordia.ca/~teaching/comp457/papers/atw96, which has a .zip file with paper and figures and there are more figures at ".../papers/w paper". The Experimental Web Viewer uses Javascript and is reputed to be pretty slow (I couldn't view it because we turn off Javascript to avoid system hangs due to incompatibilities between MS and Sun versions). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com "Michael M. Lee" writes to shlaer-mellor-users: -------------------------------------------------------------------- At 10:59 AM 6/19/98 -0400, you wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >Responding to Lee, > >> PLEASE NOTE: >> These questions are directed at Steve and/or Sally as authors of the >> "Bridges & Wormholes" paper. I thought the response would be of general >> interest, however, so I'm sending it via this group. > >As it happens Steve and Sally are usually lurkers rather than active >participants here -- the forum is not intended to be a Q&A for Project >Technologies. They normally only leap in when the advice/interpretations of the >practitioners take a sharp left off the Path of Enlightenment. > >> 1) Though there is no example of it in the B&W paper, I see no reason >> why one shouldn't be able to associate both a synchronous and an >> asynchronous return wormhole with a single request wormhole. For >> example, I may want to request a periodic notification (requiring an >> asynch return wormhole) _and_ receive a data output that indicates if >> the "away" domain has the facilities to do this (requiring a synch >> return wormhole). >> >> So my first question is, can one do this? If not, why not? And then a >> related question: Is there any reason the OOA of OOA (Fig 6.1) doesn't >> model the association between the request and return wormholes? > >To answer the first question, I believe the answer is: Yes. Check Figs. 
>1.3 and 1.4 where there are two arrows returning from the domain on the right. The one
>that doubles back directly from the input event represents the synchronous
>return while the one returning from the bubble (a state action's process) is the
>asynchronous return. Thus you can have either or both types of returns after
>issuing a bridge event.

Yes, the pictures do say this. Then I would assume that there would be two return wormholes associated with the initial request wormhole: one with a return coordinate and one with a transfer vector -- is this your reading as well? I say this because on pg 9 (6. Specifying a Return Wormhole) the 4th bullet allows the return to have either a transfer vector _or_ a return coordinate, not both.

>As regards the third question, I think the issue is viewpoint. There is a
>relationship, but it is a subtype relationship. So I assume your question is
>about why subtyping is used rather than some other form of relationship.

No, I think I understand the super/subtyping, see my next comment.

>If you are in your original client domain, then you think of the entire communication
>as a request. Let's assume the return is asynchronous for the moment. There
>will actually be two wormholes: one to initiate the original request (WS) and
>one to receive the asynchronous response (RWS), which will place the asynchronous
>event on the queue. Though these are different wormholes, your view of them is
>that they are actually just different aspects of the same request. Hence the
>is-a relationship.

Yes, it was exactly this association between these "...different wormholes" that I was asking about. I believe there is an association between the request wormhole (QWS) and the return wormhole(s) (RWS) that realize an invocation and return across a bridge, i.e., you can say that this request will use that return. If this is the case, it would be nice to see that association captured in a relationship in Fig 6.1.

>For the synchronous case there may only be one wormhole on the ADFD, but
>conceptually there is still a second distinct return wormhole with different
>characteristics that needs to be described. This distinction is only necessary
>because the underlying architectural mechanism is not defined in the OOA and the
>methodology does not want to be bound to a specific implementation such as a
>function return. Therefore even a synchronous return is treated as a separate
>entity in the OOA of OOA.

Yep, all of that makes sense to me and is consistent with my interpretation.

>One way to think about this, which has freed me from
>migraines, is that a synchronous wormhole in the ADFD could be viewed as a
>subtype migration in the OOA of OOA when it moves from Send to Receive.

Here I lose you. I would assume that for a given bridge crossing, there is simultaneously both a Send (QWS) and a Receive (RWS) and hence I'm not sure why/how/when the migration occurs.

>> 2) The asynchronous return is described as "returning control via an
>> external event" (e.g., pg 5, 1st bullet). Though it is never explicitly
>> stated, I assume what's really going on here is that there is a split in
>> the initiating thread of control where one thread returns control to the
>> ADFD/SDFD that invoked the request (with a potential delay if data
>> outputs are allowed) and the other continues in the "away" domain until
>> the asynch return wormhole is invoked to "return control via an external
>> event".
>>
>> Is this the correct interpretation?
>> If so, I would suggest clarifying
>> this as you do for the synchronous return wormhole with and without data
>> outputs on pg 5.

>I believe this is correct, but "thread" carries some baggage with it. An
>asynchronous return wormhole places an event on the domain's event queue and the
>action that initiated the request continues execution until it completes.
>Whether there is a separate thread depends upon the interpretation of time in
>the domain. If the synchronous view of time is used, the return event cannot be
>processed until the action completes (i.e., two domain actions cannot execute at
>the same time). There would be no new thread -- when the action completed,
>control would return to the queue manager so the return event could be processed
>sequentially. In the asynchronous view of time, the return event could trigger
>execution of another action before the initiating action completed. This would
>be a second processing thread.

Yes, thank you for your additional precision there. That is exactly what I was assuming.

>While clarification might be helpful, I think this is really an issue that
>follows from the basic S-M definition of how events and action executions are
>handled. This has not changed with the formalization of wormholes.

I agree that it hasn't changed -- good work, once again, S&S. The point I was making is that while in analysis it is a virtue to say just one thing in one place, in education, it's a virtue to reinforce key concepts ;)

Thanks for your responses on all of this. I have found it helpful.

- Michael

>
>-- 
>H. S. Lahman There is nothing wrong with me that
>Teradyne/ATB could not be cured by a capful of Drano
>179 Lincoln St. L51
>Boston, MA 02111-2473
>(Tel) (617)-422-3842
>(Fax) (617)-422-3100
>lahman@atb.teradyne.com
>

--------------------------------
M O D E L   I N T E G R A T I O N
Model Based Software Development
500 Botany Court
Foster City, CA 94404
mike@modelint.com
650-341-2544(v) 650-571-8483(f)
---------------------------------

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lee...

> Yes, the pictures do say this. Then I would assume that there would be two return
> wormholes associated with the initial request wormhole: one with a return coordinate
> and one with a transfer vector -- is this your reading as well? I say this because
> on pg 9 (6. Specifying a Return Wormhole) the 4th bullet allows the return to have
> either a transfer vector _or_ a return coordinate, not both.

Yes, I interpret that there would be two wormholes -- hence the subtyping in the OOA of OOA. But I read it to say that the placeholder in the OOA for defining both is the original request wormhole. Clearly the synchronous return is via that request, per Figs. 1.1 and 3.1, but the asynchronous return is magically converted to an event. The only placeholder for defining that transfer vector conversion is the original request wormhole. [I am belaboring this placeholder idea (read: concrete instance of OOA of OOA abstraction) because it is relevant below.]

> Yes, it was exactly this association between these "...different wormholes" that
> I was asking about. I believe there is an association between the request wormhole
> (QWS) and the return wormhole(s) (RWS) that realize an invocation and
> return across a bridge, i.e., you can say that this request will use that return.
> If this is the case, it would be nice to see that association captured in a
> relationship in Fig 6.1.
> >One way to think about this, which has freed me from
> >migraines, is that a synchronous wormhole in the ADFD could be viewed as a
> >subtype migration in the OOA of OOA when it moves from Send to Receive.
>
> Here I lose you. I would assume that for a given bridge crossing, there is
> simultaneously both a Send (QWS) and a Receive (RWS) and hence I'm not sure
> why/how/when the migration occurs.

My thought is that the OOA has only one placeholder wormhole -- the original request wormhole in the ADFD -- that defines the wormhole and corresponds to QWS in the OOA of OOA. [I still have no clue what the "Q" means, but that's another worry.] When the original request is executed the subtype is WS to send the message to the Away domain. As soon as that message is sent, the subtype migrates to a RWS -> SRWS to receive the synchronous return message from Away. When the synchronous return message is received and dumped back into the action, it then migrates to a RWS -> ARWS to await the asynchronous return message and convert it to an event via the TV.

This migration scheme holds up if one assumes a pure message-based interface for the bridge abstraction. That is, all the arrows in Fig 1.3 are discrete messages transferred between domains. In practice the translation is very likely to shortcut this with things like function calls so that WS and SRWS become the same. However, I think if one assumes the message paradigm is a higher level of abstraction, then there is no need for WS and RWS to exist at the same time in the OOA of OOA. If so, then the migration view is valid.

If subtype migration is valid, then I don't see a pressing need for an additional relationship. I do agree that if one does not buy the subtype migration, then another association is probably justified. In fact, I think I would prefer that RWS was not a subtype of QWS at all in that situation -- render unto the Request the things requested and render unto the Return the things that are returned, etc.

-- 
H. S. Lahman There is nothing wrong with me that
Teradyne/ATB could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> (Responding to Lahman responding to Lee)
>
> >> 2) The asynchronous return is described as "returning control via an
> >> external event" (e.g., pg 5, 1st bullet). Though it is never explicitly
> >> stated, I assume what's really going on here is that there is a split in
> >> the initiating thread of control where one thread returns control to the
> >> ADFD/SDFD that invoked the request (with a potential delay if data
> >> outputs are allowed) and the other continues in the "away" domain until
> >> the asynch return wormhole is invoked to "return control via an external
> >> event".
> >>
> >> Is this the correct interpretation? If so, I would suggest clarifying
> >> this as you do for the synchronous return wormhole with and without data
> >> outputs on pg 5.
>
> >I believe this is correct, but "thread" carries some baggage with it. An
> >asynchronous return wormhole places an event on the domain's event queue and the
> >action that initiated the request continues execution until it completes.
> >Whether there is a separate thread depends upon the interpretation of time in
> >the domain.
> >If the synchronous view of time is used, the return event cannot be
> >processed until the action completes (i.e., two domain actions cannot execute at
> >the same time). There would be no new thread -- when the action completed,
> >control would return to the queue manager so the return event could be processed
> >sequentially. In the asynchronous view of time, the return event could trigger
> >execution of another action before the initiating action completed. This would
> >be a second processing thread.
>
> One caveat for the benefit of new architects: the sequence of
> events in the second-to-last sentence can break the SMOOA
> event-processing rules if the initiating action and the action caused
> by the asynchronous return event ("another action") are in the same
> state machine (i.e. object/instance pair). (The rule being the one
> about state machines not processing a new event until finished with
> the previous one.) The pattern of returning an event to the
> initiating machine has been a fairly common pattern in models I have
> seen. Therefore, even in the simultaneous view of time, the action
> precipitated by the asynch return event may need to be interleaved
> with the initiating action and (architecturally speaking) be executed
> in the same processor thread.
>
> Just quibbling,
> - Chris
>
> -------------------------------------------
> Chris Lynch
> Abbott Ambulatory Infusion Systems
> San Diego, Ca LYNCHCD@HPD.ABBOTT.COM
> -------------------------------------------

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Directed at Pathfinder et al. who use direct translation from graphical models:

From the client side, are wormholes represented as a single bubble with a data and/or event flow output, or is the event flow left off and the event allowed to magically occur as shown in Fig. 7.1 of "Bridges and Wormholes"? How are transfer vectors represented in the server domain?

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lynch...

> One caveat for the benefit of new architects: the sequence of
> events in the second-to-last sentence can break the SMOOA
> event-processing rules if the initiating action and the action caused
> by the asynchronous return event ("another action") are in the same
> state machine (i.e. object/instance pair). (The rule being the one
> about state machines not processing a new event until finished with
> the previous one.) The pattern of returning an event to the
> initiating machine has been a fairly common pattern in models I have
> seen. Therefore, even in the simultaneous view of time, the action
> precipitated by the asynch return event may need to be interleaved
> with the initiating action and (architecturally speaking) be executed
> in the same processor thread.
>
> Just quibbling,

I quibbled first! How sad that the forum slows down a bit and we are quickly reduced to quibbling quibbles. I think I'm going to have to go say something rude on OTUG.

-- 
H. S. Lahman There is nothing wrong with me that
Teradyne/ATB could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

'archive.9807' --

tristan.pye@aeroint.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello all,

I'm having trouble modelling the following type of construct.
A<----->>B
^        ^
^        ^
|        |
|        |
v        v
C<----->>D

That all looks simple enough, but unfortunately B must be related to the same C via both A and D. A and D have no common identifiers, so there are no collapsed referentials, which scuppers any attempt to enforce it that way.

Is it possible to enforce this on the OIM, or does B have to be intelligent enough in its action language to stop it referencing two different Cs?

Any help would be appreciated (but please be gentle - I'm new to the modelling game!)

Thanks,

Tristan.

--------------------------------
Tristan Pye
Aerosystems International
www.aeroint.com
+44 (0)1935 443103
tristan.pye@aeroint.com
Thursday 16 July 1998, 4:55 pm
--------------------------------

Neil Lang writes to shlaer-mellor-users:
--------------------------------------------------------------------

>
> tristan.pye@aeroint.com writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Hello all,
>
> I'm having trouble modelling the following type of construct.
>
> A<----->>B
> ^        ^
> ^        ^
> |        |
> |        |
> v        v
> C<----->>D
>
> That all looks simple enough, but unfortunately B must be related to
> the same C via both A and D. A and D have no common identifiers, so

or in other words (mine actually) the OIM fragment above models a loop of dependent relationships, and needs to be formalized to capture the dependency of the relationships in the loop. In the OOA96 report Sally and I described three ways to do so:

1. constrained referentials
2. collapsed (or multiple) referentials
3. composed relationship

> there are no collapsed referentials, which scuppers any attempt to
> enforce it that way.
>
> Is it possible to enforce this on the OIM, or does B have to be
> intelligent enough in its action language to stop it referencing two different Cs?

I'd first examine the relationships between C-A and C-D to see if either of them can be composed. If that works then you have it built directly into the OIM. Otherwise you'll need to resort to tagging one of the relationships in the loop as constrained (by placing a 'c' after the referential) and inserting appropriate action to ensure that the constraint is met.

> Any help would be appreciated (but please be gentle - I'm new to the
> modelling game!)

A final thought. You state that A and D have no common identifiers but the real world is such that the loop is dependent. This usually occurs when instances of C partition the instances of the other objects into mutually exclusive subsets with no relationships between instances in different subsets. In such cases one can often craft compound identifiers for A, B, and D based on a (common) attribute -- the identifier of C. This may work for you.

> Thanks,
>
> Tristan.
>
> --------------------------------
> Tristan Pye
> Aerosystems International
> www.aeroint.com
> +44 (0)1935 443103
> tristan.pye@aeroint.com
> Thursday 16 July 1998, 4:55 pm
> --------------------------------

Hope this helps

Neil -- happy to see some activity in the eSMUG

----------------------------------------------------------------------
Neil Lang neillang@pacbell.net
----------------------------------------------------------------------

bgrim@ses.com (Bob Grim) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am looking for a software engineer who has experience using Shlaer/Mellor (either as an analyst or architect). It would be a contract position.
If you are interested in hearing about the job, please contact me via email or telephone.

Thanks

Bob Grim
(512) 425-5196
bgrim@ses.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pye...

> A and D have no common identifiers, so
> there are no collapsed referentials, which scuppers any attempt to
> enforce it that way.
>
> Any help would be appreciated (but please be gentle - I'm new to the
> modelling game!)

Neil's answer is certainly the correct one, assuming there are no collapsed identifiers. But I would push back -- slightly -- on this assumption if you are truly new at this. (And because I happen to be a big fan of collapsible identifiers.)

There are certainly cases where identifiers are really independent. For example, this is clearly the case if the identifier has concrete meaning that is the only way in the problem space to express instance uniqueness (e.g., an IRS Tax Payer ID in a tax preparation package). However, I submit that in many cases it is possible to construct collapsible compound identifiers. Moreover, I would argue that this is desirable to do wherever possible for two reasons. First, it brainlessly preserves referential integrity for loops; it Just Works. If, instead, you compose a relationship, you have to exercise some care in selecting which branch to compose. And if you use the constraint technique you have to do real work to implement it.

The second reason is that in many situations it can make maintenance easier. To demonstrate this I have to go through a long-winded example to show that obvious identifiers with concrete semantics are not always the best ones to use.

We build an Instrument that has Channel Cards and each Channel Card has Channels. As it happens, in the problem space (i.e., the user's view) Channels are identified by a channel number from 0...N without regard to what Channel Card they are on. In the real world the Channel Cards have a unique hardware identifier that only our Field Service people could love. But the user actually thinks of them as being interchangeable and identifies them by particular slots within the VXI cage where they live. We can dismiss the hardware identifier as irrelevant to solving the user's problem (i.e., executing a test) since the user views the Channel Cards as interchangeable at the hardware level. Since there is only one cage, it is appealing to use the slot number as the channel card identifier. Finally, the Instrument has an identifier that is completely arbitrary since there is only one. It would appear that Channel Cards and Channels have identifiers that are meaningful in the problem space (slot number and absolute channel number) that are independent from each other and from the Instrument.

As it happens there is a whole lot of other nonsense on our IM so there are lots of relational loops that pass through Instrument, Channel Card, and Channel. Rather than trying to deal with composing or constraining all these, we opted for a different identification scheme. We actually identify Channel Cards with the Instrument ID and the slot number. This is redundant, but it doesn't hurt since there is only one Instrument. We identify the Channel with Instrument ID, slot number, and relative pin number on the Channel Card. The absolute pin number becomes an attribute. At the OOA level this necessitates a search whenever the user's test gives us an absolute pin number in the bridge.
But this still allows the unique Channel to be accessed (it is effectively an alternative identifier). In practice this costs us little or nothing in performance because we can colorize the Find to use a table lookup when we do the translation. [As it happens there are other reasons in the problem space for maintaining the mapping explicitly in the OOA as an entire Pinmap domain, but they aren't relevant here.]

So why is this more complicated and less intuitive identification scheme more robust? The answer lies in the assumption that there is one Instrument in the system. In fact, an enhancement to the product line requires using two VXI card cages to get a larger channel count and, consequently, use of two Instruments. We would like the same device driver to control both, but the slot numbers and the absolute channel numbers repeat because they are numbered relative to a card cage. If we had used the intuitive, concrete semantics for the identifiers we would have had to perform surgery on the domain internals to support multiple instruments. But by using the compound identifiers, everything Just Works because the user's test *has* to tell us which Instrument to address due to VXI constraints. [We were prescient enough originally to make the Instrument ID be the VXI session's handle, so even this Just Worked.]

I am sure that there are counterexamples where doing something like the above might require more work when maintenance is done than if you hadn't. My assertion is that in my experience I haven't seen such a case in practice, and we use collapsible identifiers almost everywhere. And I have seen cases like the above where it actually made later maintenance easier.

So what's the point? I am suggesting that you might want to revisit the identification scheme to see if there isn't a way to collapse identifiers. There might not be a reasonable way to do so, but it is worth a check.

[BTW, it is noteworthy that by changing the identification scheme, we did not introduce alien concepts that are not in the user's world view. The user knows that Channels are on particular Channel Cards and that the mapping between absolute channel number and slot/relative channel number is deterministic because the user must build a fixture to connect Channels to the UUT. Thus the user is well aware that the two identification schemes are interchangeable. If you do convert to a collapsible scheme you should be sure that you are not introducing artificial, non-problem space concepts into the models.]

-- 
H. S. Lahman There is nothing wrong with me that
Teradyne/ATB could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

John Hendrix writes to shlaer-mellor-users:
--------------------------------------------------------------------

hi bob:

I am not an experienced Shlaer/Mellor analyst so I probably am not the right candidate for your job. I use it for my home projects and find it very useful. On the job however, I have yet to encounter a supervisor or manager that can stand to see his engineers doing anything but coding. I guess I must be working for the wrong companies. ;-) I also have yet to meet anyone who has ever known of Shlaer/Mellor being used anywhere. If you don't mind, would you let me know how many responses you get and give me an idea of how much (or little) S/M is used in the U.S.?
thanks

johnh

Bob Grim wrote:
> bgrim@ses.com (Bob Grim) writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> I am looking for a software engineer who has experience
> using Shlaer/Mellor (either as an analyst or architect).
> It would be a contract position. If you are interested
> in hearing about the job, please contact me via email
> or telephone.
>
> Thanks
>
> Bob Grim
> (512) 425-5196
> bgrim@ses.com

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello!

Would someone be so kind as to guide/instruct me as to translating S/M into C++? I am working from home today, and naturally left my copy of "Modeling the World in States" at the office.

What I am specifically looking for is *rough* guidelines on translating relationships to C++.

Any input from the list is welcome, plus pointers to any articles discussing same (I know I read 'em somewhere!)

Thanks for your time.

Kind Regards,

Allen Theobald

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> Would someone be so kind as to guide/instruct me as to translating S/M
> into C++? I am working from home today, and naturally left my copy of
> "Modeling the World in States" at the office.
>
> What I am specifically looking for is *rough* guidelines on
> translating relationships to C++.
>
> Any input from the list is welcome, plus pointers to any articles
> discussing same (I know I read 'em somewhere!)

This question has no one answer: there are many ways. Which one is best depends on the architectural constraints. If simplicity is your goal then you might try storing your objects in an STL map. Create a class (struct) for the object's identifier, and a compare function. Then use STL:

typedef map<object_identifier, object, identifier_compare> obj_lookup_table_t;

Using a map is a very simple way of implementing relationships using referential attributes. To traverse from the referential object to the defining object you use the referential attributes as the subscript of the map lookup. To traverse the other way you use the identifier of the source object and "find" a destination object instance with the same value for its referential attributes.

Simple implementations can also be achieved with vector<> if you use the link/unlink style; but this requires the concept of a relationship in the architectural data structures. Not difficult: but slightly more thought is needed.

It sounds like you're doing architectural hacking (or just messing about?) so don't worry about efficiency until you know that it's necessary.

Dave.
-- 
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306 http://www.gpsemi.com

Michael.French@cambridge.simoco.com (Michael S. French x7174) writes to shlaer-mellor-users:
--------------------------------------------------------------------

ciao john,

I cannot speak for the states but the method is used extensively in the u.k.

yours

mikef

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp writes to shlaer-mellor-users, regarding my question on translating S/M to C++:
--------------------------------------------------------------------

> This question has no one answer: there are many ways. Which one is
> best depends on the architectural constraints.
> If simplicity is your goal then you might try storing your objects
> in an STL map...

Yes! Simplicity, and understanding, is my goal.

> ...It sounds like you're doing architectural hacking (or just
> messing about?).

Messing about! :^) I'm curious how these translations can be done without the use of a tool. Nothing facilitates understanding like doing it by hand.

Keep the suggestions coming!

Kind Regards,

Allen Theobald

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> Yes! Simplicity, and understanding, is my goal.
>
> > ...It sounds like you're doing architectural hacking (or just
> > messing about?).
>
> Messing about! :^) I'm curious how these translations can be done
> without the use of a tool. Nothing facilitates understanding like
> doing it by hand.

One thing that may aid understanding is to recognise that recursive design and OOA are two completely independent concepts. Shlaer Mellor provides a formalism (OOA) which provides a good starting point for translation. OOA says _nothing_ about the architecture of the implementation. It also says nothing about the architecture of the translator.

I tend to write a lot of translators. Most are very simple Perl scripts. Over time I have found that the architecture of these translators is the same for them all -- regardless of the problem-architecture and the solution-architecture. (I use the term "problem-architecture" to describe the structure of the description of a problem. This may be an OOA model; but it frequently is not.)

Recursive Design should be a formalism for describing the process of translating a problem-architecture onto a solution-architecture. Most tool-based code generators are little more than compilers. I have not yet seen a description of the SM RD process. It seems to be a fairly bland concept of populating a set of code templates by navigating an OOA model (this description says nothing about RD, IMHO).

If you really want to understand how to translate models then it may be useful to investigate the translative paradigm instead of OOA. I have found that by translating from a number of different problem-architectures, with different ways of describing them, I have gained a better understanding of techniques for translating OOA models. If you start your investigations with OOA then you're jumping in at the deep end!

Dave.
Not speaking for Mitel Semiconductor.
-- 
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306 http://www.gpsemi.com

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

> ...Recursive Design should be a formalism for describing the process
> of translating a problem-architecture onto a
> solution-architecture... I have not yet seen a description of the SM
> RD process...

Speaking of which, when is the RD book going to be out?

> If you really want to understand how to translate models then it may
> be useful to investigate the translative paradigm instead of OOA...

Care to send me off in the right direction (books, articles, etc.)?

> ...If you start your investigations with OOA then you're jumping in
> at the deep end!

It certainly seems that way at times...
:^)

Kind Regards,

Allen

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

>> If you really want to understand how to translate models then it may
>> be useful to investigate the translative paradigm instead of OOA...

> Care to send me off in the right direction (books, articles, etc.)?

I can't think of any good ones - I've probably reinvented many wheels over the past few years; but experience is a good tutor. However, some pointers which might get you going in the right direction:

1. Learn Perl

It is a very good language for experimenting with translation (even if you later recode in C).

2. Don't over-translate

It is often tempting to translate when you don't need to. For example, if you want a linked list then it's very easy to generate a data structure with a "next" pointer and the node value; and to generate the code that uses it. If you are generating C++ then this would be a waste of time: STL does linked lists as part of the language. If you translate when you don't need to then you can't justify it over mainstream techniques. You make the translators too complex and thus obscure their real benefit. You also create maintenance problems and thus flaky systems. You may also get poor performance because the compiler may be optimised for built-in features. It's also harder to debug over-translated code.

3. Don't pollute the translator

A common scenario: I write a problem description, design a solution architecture and write a script to map the former onto the latter. Then someone changes the problem in a way that breaks the translator. The easy way to cure this is to work out how to hack the generated code, and then to automate this hack within the translation script. This is a bad mistake and leads to an exponential decrease in maintainability (trust me: I've done it. It's much worse than hacking a fix into a "normal" program). The correct way to handle the change in the problem is to fix the solution architecture, and then fix the translator. If you want to, you can hack the translator to implement the fixed architecture with just the "normal" penalty of hacked code.

4. Don't do software (yet)!

It is my experience that programming (inc. design) is not very good for experiencing the importance of architecture. The size of program that can be developed by one person does not usually require a strong sense of architecture. On the other hand, if you try to develop even a small FPGA then architecture is vital. Read any book on hardware synthesis tools and the first chapter will talk about "RTL" architectures (a class of architectures where the behaviour is described as clocked registers connected by combinatorial logic). You'll soon be seeing data paths and control paths; multiplexors, tristate buses and state machines. A large number of the translators I write involve generating both VHDL (hardware description language) and assembler/C code that runs on it. This environment emphasises the benefits of translation over traditional OOD software techniques - and thus encourages translation over OOD.

Dave.
-- 
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306 http://www.gpsemi.com

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Michael S. French...

> I cannot speak for the states but the method is used extensively in the u.k.
Can you say how extensively?

I was told that during the recent Object Expo conference in London, S-M was not mentioned once. UML has had a clear field since Kennedy-Carter stopped exhibiting a few years ago. PT in Scotland are nowhere to be seen. I haven't seen any new magazine articles from the Major Gurus for quite a time. I could mention the new RD book...

Just a general moan about the apparently zero profile of S-M in the UK.

Mike
-- 
Mike Finn Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

> One thing that may aid understanding is to recognise that
> recursive design and OOA are two completely independent
> concepts. Shlaer Mellor provides a formalism (OOA) which
> provides a good starting point for translation. OOA says
> _nothing_ about the architecture of the implementation.

Agree.

> It also says nothing about the architecture of the
> translator

I'm a bit puzzled by this. I think of translation as a process and a translator as the means by which it can be performed automatically. Isn't this a little out of scope? Do I care how the translator has been developed?

> I tend to write a lot of translators. Most are very simple
> Perl scripts. Over time I have found that the architecture
> of these translators is the same for them all -- regardless
> of the problem-architecture and the solution-architecture.
> (I use the term "problem-architecture" to describe the
> structure of the description of a problem. This may be an
> OOA model; but it frequently is not.)

I've only needed to write one Translator, a dedicated program in C. Although the solution-architecture (Software Architecture?) has of course changed, the "problem-architecture" (OOA-of-OOA) has remained the same.

> Recursive Design should be a formalism for describing
> the process of translating a problem-architecture onto
> a solution-architecture. Most tool-based code generators
> are little more than compilers. I have not yet seen
> a description of the SM RD process.

I'm not sure what you're asking for here. Do you want a notation for describing the translation process? And which, as a sort of byproduct, also produces a Translator? This looks to me like a possible application for OOA/RD. Recursive or what!?

> It seems to be
> a fairly bland concept of populating a set of code
> templates by navigating an OOA model (this description
> says nothing about RD, IMHO).

At this time, I think it does boil down to the fairly bland concept of navigating the OOA-of-OOA and filling code templates. But how much excitement can you take? :-)

> If you really want to understand how to translate models
> then it may be useful to investigate the translative
> paradigm instead of OOA. I have found that by translating
> from a number of different problem-architectures, with
> different ways of describing them, I have gained
> a better understanding of techniques for translating OOA
> models. If you start your investigations with OOA then
> you're jumping in at the deep end!

The trouble is you have to map from one formalism to another. If not S-M OOA, what other formalism have you been using?

I think having a good understanding of OOA (or another formalism) is a prerequisite to understanding the Translation Process. This is because the Translation Process must be described in terms of Metadata since it's not application specific.
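As a toy illustration of the sort of Metadata-driven template filling I have in mind, here is a minimal sketch (the structure and function names are my own invention for illustration, not taken from any real Translator or from the OOA-of-OOA):

  #include <cstdio>
  #include <string>
  #include <vector>

  // Toy metadata: a tiny slice of what an OOA-of-OOA population
  // would hold for one object and its attributes.
  struct AttributeMeta { std::string name; std::string type; };
  struct ObjectMeta {
      std::string name;
      std::vector<AttributeMeta> attributes;
  };

  // The back-end only navigates the metadata and fills a code
  // template; nothing in here is specific to one application.
  void emit_class(const ObjectMeta& obj) {
      std::printf("class %s {\nprivate:\n", obj.name.c_str());
      for (std::size_t i = 0; i < obj.attributes.size(); ++i)
          std::printf("    %s %s;\n",
                      obj.attributes[i].type.c_str(),
                      obj.attributes[i].name.c_str());
      std::printf("};\n");
  }

  int main() {
      ObjectMeta dog = { "Dog", { { "name", "std::string" }, { "age", "int" } } };
      emit_class(dog);   // emits a C++ class skeleton for the object
      return 0;
  }

The point is that only the metadata population changes from application to application; the navigation and the templates stay the same.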
Mike
-- 
Mike Finn Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote (responding to whipp):

> > [OOA] also says nothing about the architecture of the
> > translator

> I'm a bit puzzled by this. I think of translation as a process and a
> translator as the means by which it can be performed automatically.
> Isn't this a little out of scope? Do I care how the translator has
> been developed?

You are right. From the point of view of a system, you don't really care how the translator is developed (unless you are maintaining it). However, from a methodology point of view I think it is very important. OOA is big. Building a translator for OOA is a big investment. Since there are many ways of building translators, it follows that a rigorous investigation of translation methods using OOA will be a huge undertaking. Fortunately, OOA is not a prerequisite for translation. You can use much simpler starting points to investigate translation. Of course, you will need to see if the methods scale up to OOA translation; but a lot of work can be done without that burden.

> > I tend to write a lot of translators. [...]

> I've only needed to write one Translator, a dedicated program in C.
> Although the solution-architecture (Software Architecture?) has of
> course changed, the "problem-architecture" (OOA-of-OOA) has remained
> the same.

This is a difference between us. I tend to develop a large number of simple products. I generally co-develop the problem-description and the solution-architecture (and the translator that maps them, though this lags the other two). Some of the translators are re-used, some are one-offs. On those rare occasions when I describe the problem using an OOA model, I don't develop my own translator. OOA translators are big and we have an adequate translator that has been developed over a number of years by our architecture team. But that only produces C++ so is unusable for many of the problems I need to solve (where the product is hardware; or where we don't have a C++ compiler for the target processor; or ...).

> > Recursive Design should be a formalism for describing
> > the process of translating a problem-architecture onto
> > a solution-architecture

> I'm not sure what you're asking for here. Do you want a notation
> for describing the translation process? And which, as a sort of
> byproduct, also produces a Translator? This looks to me like a
> possible application for OOA/RD. Recursive or what!?

What I want is a solid foundation for building translators. What I have at the moment is informal techniques for the development of translators. Some things are clear:

. The thing that is translated is the model, not the formalism. In some cases, a generic OOA translator is possible, but only if you don't use the population-data in the translation; and if you are happy with a generic architecture.

. The back-end, code-generation, stage should do nothing more than template expansion. It can be tempting to write complex code generators that derive information on-the-fly. This approach causes maintenance problems. The templates should be written to navigate a well-defined information structure. (However, accessor techniques can be used to avoid actually building the structure before populating the templates.) The information structure that is navigated could be defined by an OOA model.
. The front-end, parsing, stage should do nothing more than extract information from the problem description to populate a well-defined information structure. This is the inverse of the code generation step. This step could be defined in terms of populating an OOA model. If the thing being translated is an OOA model then this step populates the OOA-of-OOA - this should be done by a CASE tool. However, it is more common, in my experience, that the source description is a text document (possibly Word, FrameMaker or HTML). The "Implementation Specification" is a document written to interpret the initial requirement specification, and can usefully be written to be used as the source code for the translation.

. The information structure used by the front-end is independent of the information structure of the back-end. Don't attempt to "simplify" the system by polluting the problem-architecture with the solution-architecture (or vice versa).

>> [using OOA is jumping in at the deep end]

> The trouble is you have to map from one formalism to another. If
> not S-M OOA, what other formalism have you been using?

A useful formalism might be an OOA model. Every OOA model provides a rigorous definition for the interpretation of population tables. So if you want to translate population data, then the OOA model provides the underlying formalism for that data. If you want to translate OOA, then OOA-of-OOA is the underlying formalism. But if I want to translate a bus-slave-register-set model then an OOA model provides the formalism for describing the registers of the bus-slave (peripheral). [Aside: my bus-slave model has been used to produce both low-level C and assembler macros for accessing the peripheral register's bitfields, and to generate a VHDL hardware description of the peripheral itself. This is a good example of a scenario where a generic OOA translator is completely irrelevant.]

> I think having a good understanding of OOA (or another formalism)
> is a prerequisite to understanding the Translation Process. This
> is because the Translation Process must be described in terms of
> Metadata since it's not application specific.

I think I disagree with you. The problem description and the solution architecture are both application specific; and, in my experience, the translator that maps one to the other is also application specific. It is probably more correct to say that a translator is specific to a class of applications. The majority of applications can be placed in one of a small number of classes, and can thus reuse a general-purpose architecture. However, you can always add value by "sub-classing" these generic application classes; and sometimes it is useful to start from scratch.

Dave.
-- 
Dave Whipp, Embedded Systems Group,
Mitel Semiconductor, Plymouth, United Kingdom. PL6 7BQ.
tel. +44 (0)1752 693277 mailto:david_whipp@mitel.com
fax. +44 (0)1752 693306 http://www.gpsemi.com

"Campbell D. McCausland" writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Responding to Finn

>Can you say how extensively?

We at PT International in Scotland have a database of several hundred active Shlaer-Mellor users in the UK. We have new Shlaer-Mellor users joining us all the time. These are _new_ users, not existing users contacting PTI for the first time. These are companies who are looking at UML, then looking at Shlaer-Mellor and Translation and concluding that it represents superior technology.

>PT in Scotland are nowhere to be seen.
This year, PTI has organised seminars for Shlaer-Mellor practitioners in London and Glasgow. We _sponsored_ OT98 in Oxford, providing sessions and a keynote, and are providing no fewer than three talks for the Embedded Systems Conference Europe at Royal Ascot in September.

>I haven't seen any new magazine articles from the Major Gurus for quite a time.

Steve and Sally publish a steady stream of papers on the PT Web site.

I hope this clarifies the considerable level of Shlaer-Mellor activity in the UK,

Campbell McCausland

"Paul Higham" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Being an ardent Shlaer-Mellor fan(atic), it is with great alarm that I read postings like that of Mike Finn, and it is with great relief that I read the enclosed. The discrepancy is something that should concern those that would like to see the Shlaer-Mellor method succeed.

The silly phrase "Perception is Reality" comes to mind; if the design community perceives UML to be the next exciting, new technology in software engineering and also perceives OOA/RD as beyond the fringe, then the benefits of translation could go the way of BetaMax, wherein C++ classes continue to be "lovingly hand-crafted with care" and unreproducible memory violations continue to waste our time.

In spite of the methodological fluff that surrounds it, UML is a notation, not a method. As such it is not incompatible with OOA. This much is fact but, however undisputed, it is nevertheless not well known. I think that this is a problem. OOA is not in competition with UML; it is in competition with imprecision and lack of rigour. So why the perception? BetaMax lost to VHS because of marketing; I sincerely hope that OOA/RD does not suffer the same fate.

Is it possible to scale up without losing the quality? Can Steve Mellor be cloned to present at more conferences? Can others speak up in support at public forums? Some things I would like to see:

* some more aggressive competition among Shlaer-Mellor CASE tool vendors

* greater visibility in North America of existing CASE tools

* articles in popular magazines (Time, Playboy, . . . ) on success stories of using OOA/RD in addition to the valuable PT articles

* any other ideas?

SMUGgers of the world unite, you have nothing to lose but excessive maintenance!

Paul Higham
Software Development Manager
NORTEL
paulh@nortel.ca
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> What I am specifically looking for is *rough* guidelines on translating relationships to C++.

Dave has already started you down the road of building translators, which is fine. However, I would point out that it is possible to translate an OOA to C++ completely manually (i.e., typing C++ for the application sources into the compiler IDE from the OOA diagrams in your lap). I would suggest that this approach might be useful if you are just starting out at getting familiar with translation. This has a couple of useful characteristics.

First, it is an easy way to demonstrate that translation is feasible and not very complicated. This forum has had a number of conversations about subtle ways one can shoot oneself in the foot during translation, and there is usually a certain level of magic associated with RD for newcomers. If you stick to a simple synchronous view of time and your application is not doing something really mischievous and performance isn't a major worry, then translation tends to be pretty simple. So a manual translation quickly removes the mystique around translation.

Second, the sorts of things that lend themselves to architectural mechanisms become apparent pretty quickly. By the time you have done your second state machine you will probably be able to create a fill-in-the-object-name template that has 80% of the stuff you would ever need in the skeleton. This is a nice way station on the Path to Enlightenment about how one goes about creating architectures and building translators.

The basic translation elements from OOA to C++ are fairly obvious: object to class; attribute to private data; state action to class public function; supertype to virtual base class; etc. As you opined, the relationships provide what trickiness there is. In the pure manual case the relationships can be handled lots of ways. First, consider the use of pointers...

1:1. Simplest is to just use a pointer private variable instead of a relational identifier. If your actions navigate in both directions, you want a back link pointer and cleanup code in the destructors. The pointers are assigned in constructors or, for conditional relationships, when the relationship is instantiated.

1:M. Simplest is to use a utility linked list object to hold the pointers to the Ms, with a pointer to the linked list on the 1 side. Again, if you need to go from M to 1, you also need back pointers and cleanup code in the destructor. You would create the linked list instance in the 1-side constructor.

M:M. Simplest is to make one static associative object instance that contains a 2-entry table of pointers for the corresponding instances and sort it by the entries on the From side. (You can use two tables if you need to navigate from both sides.) This is the one that has the most options and is most likely to have performance problems.

An even simpler approach (but a seriously slow one) is to use the identifiers literally. In this case you implement a static Find function for each class that will return a pointer to the instance given the identifiers. You also create a static table (or linked list) for the instance identifiers that the Find searches. The table is updated by the constructors and destructors.
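As a rough illustration only - the class and attribute names here are invented, and all error handling is omitted - the identifiers-literally scheme might come out something like this:

    #include <list>

    class Owner {
    public:
        explicit Owner(int id) : owner_id(id) { instances.push_back(this); }
        ~Owner() { instances.remove(this); }

        // Static Find: linear search of the instance table by identifier.
        static Owner* Find(int id) {
            for (std::list<Owner*>::iterator it = instances.begin();
                 it != instances.end(); ++it)
                if ((*it)->owner_id == id) return *it;
            return 0;   // not found
        }
    private:
        int owner_id;                        // identifier attribute
        static std::list<Owner*> instances;  // updated by ctor/dtor
    };
    std::list<Owner*> Owner::instances;

    class Dog {
    public:
        Dog(int id, int owner) : dog_id(id), owner_id(owner) {}

        // Navigate relationship R1 by using the referential attribute
        // literally: look the owner up by identifier on every access.
        Owner* R1_owner() const { return Owner::Find(owner_id); }
    private:
        int dog_id;    // identifier attribute
        int owner_id;  // referential attribute formalising R1
    };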
(If the object has 1:M or M:M relationships, you may also want a static FindNext.) If you obey the IM rules for relational identifier placement, this will Just Work and the infrastructure code will be exactly the same in every class.

As I indicated above, you can write this C++ code directly from the OOA and I think it is a useful thing to do as a lesson in how translation works at the most elemental level. There are some arcane ways that things can go wrong (e.g., a static FindNext has problems in a simultaneous view of time), but they won't be relevant on a simple project and the simplistic approaches above provide good insight into the basic process. It also provides insight into the sorts of things one *wants* to do in an automated translator or in translation support tools -- which addresses Dave's worry about over-translating.

--
H. S. Lahman                      There is nothing wrong with me that
Teradyne/ATB                      could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Tracy Morgan writes to shlaer-mellor-users:
--------------------------------------------------------------------

Like Campbell, I, too, would like to add some further examples of Shlaer-Mellor activity in the UK.

As I'm sure most of you will be aware, Kennedy Carter has been involved with the Shlaer-Mellor Method now for well over 7 years. We too have an extensive database of active Shlaer-Mellor users in the UK, other parts of Europe and the US.

We've also been organising and running, for 4 years now, a Shlaer-Mellor User Group (more affectionately known as SMUG) Conference. The fifth conference is scheduled to take place in Cheltenham, England on the 15th and 16th of September. This is an annual, and unique, opportunity for people to hear from speakers, mainly from user organisations, all of whom have many years' experience of OOA. Each year, without exception, the speakers have been happy to share their hard-won insights on a range of critical issues.

The SMUG 98 programme is certainly lining itself up to be the best yet, incorporating:

- UML
- The future of OOA/RD
- Configurable code generation
- Use case analysis

Organisations speaking at this year's conference include: Rover/BMW, Lucent Technologies, LucasVarity, Daimler Benz, Matra BAe, GEC, Mitel & GPT.

If anyone wants any further information, please feel free to email me or look at KC's web site - www.kc.com. There's a lot of exciting stuff happening but I do agree that we do need to give the method a higher profile. The SMUG conference is just one way we try to achieve that.
***********************************************************************
BOOK YOUR PLACE ON THIS YEAR'S SMUG CONFERENCE NOW
15 - 16 September, Cheltenham.
Please call or email for more details
***********************************************************************

Tracy Morgan           tel   : +44 1483 483200
Kennedy Carter Ltd     fax   : +44 1483 483201
14 The Pines           web   : http://www.kc.com
Broad Street           email : tracy@kc.com
Guildford GU3 3BH UK

"We may not be Rational but we are Intelligent"

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Higham...

> In spite of the methodological fluff that surrounds it, UML is a notation, not a method. As such it is not incompatible with OOA. This much is fact but, however undisputed, it is nevertheless not well known. I think that this is a problem. OOA is not in competition with UML; it is in competition with imprecision and lack of rigour. So why the perception?

Though UML may be a notation, there are several methodologies associated with it. Also, Rational announced the Rational Unified Process (RUP) last month, which is supposed to be tailored to UML. The reality is that there is an army of consultants introducing methodologies that are based upon UML. One can argue that it is silly for a notation to drive a methodology and that it is even sillier when the notation is primarily driven by the syntax of existing OOPLs. But I doubt that such arguments will have much effect on what is effectively becoming a juggernaut.

In the '96-'97 debate over translation Steve got the Three Amigos to admit that Translation is a viable approach. Unfortunately this was a victory little noted nor long remembered. I have personal knowledge of two situations where S-M lost out to UML-based methodologies. In both cases the primary basis for the decision was the perception that S-M was so rigorous that it stifled creativity. So much for technical merit and unambiguous communication. The reality is that marketing strategies are far more important than technical merits nowadays.

> Being an ardent Shlaer-Mellor fan(atic), it is with great alarm that I read postings like that of Mike Finn, and it is with great relief that I read the enclosed.
>
> The discrepancy is something that should concern those that would like to see the Shlaer-Mellor method succeed.

Currently internal pressure is mounting in our shop to abandon S-M. The perception is that S-M is not viable in the long term and that we will have major problems in the future as tool vendors leave the marketplace. The argument is that the more software we commit to the S-M CASE tools, the more difficult it will be to convert later.
The corollary argument is that since the S-M notation is a subset of UML, we could maintain the methodology while converting to UML-based tools. But this is clearly a very slippery slope.

The view that S-M is not viable in the long term is based upon several things:

(1) There is a serious schism developing among the tool vendors since Steve and Sally allied themselves with Bridgepoint. As near as I can tell, communications have essentially ceased between Steve and Sally and the other tool vendors. This will inevitably lead to variants in the methodology that will balkanize S-M into even smaller units.

(2) Rational has been very effective at marketing UML and there is no question that it is dominating the marketplace.

(3) S-M no longer has any visibility. There used to be articles in the industry rags concerning S-M, but not recently. This seems to be more of a problem with the consulting community -- Steve and Sally can't do it all and most UML-based articles are from consultants.

(4) The Long Awaited RD book has not yet appeared. This is a major hole in the methodology. It is a little difficult to claim that S-M is a rigorous and unambiguous methodology when this area is not formally defined. It also aggravates the tool vendor schism because the vendors must supply their own solutions to fill the void, and they will be understandably reluctant to back them out later.

(5) There are no success stories. Every OO conference has several case-study presentations about how marvelously UML has worked for someone. When was the last time we saw such a case study from an S-M shop?

(6) S-M has deliberately painted itself into the R-T/E niche despite the fact that 90+% of the world's software is in the MIS community. Unless S-M can break into the MIS market it will never gain the necessary critical mass of acceptance to make it viable.

(7) The succession of mergers, reorganizations, layoffs, and relocations at PT suggests that things are not going smoothly in the company that is supposed to be providing leadership for the methodology.

(8) There is no standards body to independently own and promote the methodology. While this is often a rather hollow ploy, it has become pretty much a checklist item to promote standards. Though the current attempt to get an executable notation based upon S-M incorporated in OMG is a good idea, we really need the aura of respectability that a standards body provides.

> Some things I would like to see:
>
> * some more aggressive competition among Shlaer-Mellor CASE tool vendors

I have to disagree with this one. Right now I think it is more important to get them all on the same page. Because of the lack of definition around RD, all the vendors have some differences in the way they handle bridges in the OOA. For the same reason, one cannot buy an OTS architecture and use it with any CASE tool -- the architecture has to be bought from the same vendor that supplies the code generator, and the code generator has to be bought from the same vendor that provides the OOA bubbles & arrows. I see the ability to plug & play as being the basis for true competition in the long term. I also see it as a highly marketable point for the overall methodology. Finally, I think balkanization of the methodology by the vendors must be prevented if S-M is to become viable.

> * greater visibility in North America of existing CASE tools
> * articles in popular magazines (Time, Playboy, . . . ) on success stories of using OOA/RD in addition to the valuable PT articles

I certainly agree with this one. The spin I would add is that I think the consultants need to become the activists by pushing companies to coauthor papers with them. By putting in some effort to actually get the paper written and placed, the consultant gets rewarded with the publicity, so it is to the consultant's advantage to be the champion. Testifying from my own experience, we have a fair amount of data and case studies, but we simply don't have time to go through the drill of writing and editing papers.

I also believe that companies are going to have to be more public about their success stories. I believe the only effective counter to the UML marketing juggernaut is to get out front with the fact that S-M actually works. The basic marketing thought that needs to be conveyed is that rigor and consistency pay off in applications that work.

> * any other ideas?

Of course! Silly question. One thing to do is to match some of the marketing hype. Clearly object-level reuse has failed and it is out of vogue. Now the silver bullet is component reuse. I think S-M is ideally suited to take advantage of this. The basic marketing pitch should be:

(A) S-M recognized the problems with object reuse and chose not to even attempt to support it. [Implies deep understanding of What It's All About.]

(B) S-M has provided component reuse from Day 1 in the form of domains and their rigorously defined interfaces. [Implies S-M was aware of the value of components and the methodology was designed around them long before the other methodologies started patching things to make them work.]

With this pitch S-M indirectly becomes established as the leader in dealing with Interfaces and Components. It provides an excellent entree into the MIS community. As I indicated above, I think S-M has to get into the MIS marketplace. The main drawback is that the MIS world is littered with legacy code, and almost any significant OO project will have to interface with that legacy code. The argument is that such interfaces require higher skill levels to design, so doing this while introducing OO is an invitation to disaster.

The problem is keeping the clients out of trouble as they attempt to actually design those interfaces to legacy code. This is where the tool vendors and consultants have to exercise some discipline. There has to be agreement that any potential client *must* buy into a package that includes a minimal amount of consulting along with the CASE tools. In addition there has to be lots of educational material -- preferably published in industry rags -- that identifies the pitfalls and the need for consulting. To make this work there has to be an overall strategy and significant cooperation among vendors and consultants. Fortunately the S-M community is small enough that this is not an unrealistic expectation.

Finally, as I indicated above, I think it is imperative that we get the story out that S-M Just Works. Though publishing success stories is helpful, I don't think it will carry the day. What would be much more persuasive is a metric that defined the percentage of successful projects among all S-M projects. If S-M is really as good as we think it is, that metric should be significantly higher than overall industry averages for successful projects. If so, that would be a dynamite marketing tool. The trick is forming the metric. To be credible there has to be a reasonable sample size.
This means collecting consistent data from a large fraction of the existing S-M users. This is a nontrivial task and there are a number of potential problems (e.g., confidentiality). Coming from a TQM shop, my suggestion would be to form a QIT of a few interested individuals to accomplish this. The QIT would define the metric and the necessary data, develop a plan for collecting the data unintrusively, solicit the data from S-M users, and eventually publish it. Any volunteers?

--
H. S. Lahman                      There is nothing wrong with me that
Teradyne/ATB                      could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Carolyn Duby writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> I certainly agree with this one. The spin I would add is that I think the consultants need to become the activists by pushing companies to coauthor papers with them. By putting in some effort to actually get the paper written and placed, the consultant gets rewarded with the publicity, so it is to the consultant's advantage to be the champion. Testifying from my own experience, we have a fair amount of data and case studies, but we simply don't have time to go through the drill of writing and editing papers.

For the record, Pathfinder is doing its share of writing this year. I'll be presenting at C++ World in August and Embedded Systems in November. I also think the more exposure the better, so since you mentioned it... if you have a success story to share, I'm volunteering to collaborate with you.

> The problem is keeping the clients out of trouble as they attempt to actually design those interfaces to legacy code. This is where the tool vendors and consultants have to exercise some discipline. There has to be agreement that any potential client *must* buy into a package that includes a minimal amount of consulting along with the CASE tools. In addition there has to be lots of educational material -- preferably published in industry rags -- that identifies the pitfalls and the need for consulting. To make this work there has to be an overall strategy and significant cooperation among vendors and consultants. Fortunately the S-M community is small enough that this is not an unrealistic expectation.

Integrating legacy code with models is a major concern for many of our clients and prospects. I've proposed this topic to the Embedded Systems East conference next spring.

> The QIT would define the metric and the necessary data, develop a plan for collecting the data unintrusively, solicit the data from S-M users, and eventually publish it. Any volunteers?

Sounds intriguing. Do you have any preliminary ideas?

Carolyn

--
________________________________________________________
| Pathfinder Solutions Inc.   www.pathfindersol.com    |
| 888-OOA-PATH                                         |
| effective solutions for software engineering         |
| challenges                                           |
| Carolyn Duby                voice: +01 508-384-1392  |
| carolynd@pathfindersol.com  fax:   +01 508-384-7906  |
|______________________________________________________|

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello Campbell and welcome to E-Smug.

> >Can you say how extensively?

> We at PT International in Scotland have a database of several hundred active Shlaer-Mellor users in the UK. We have new Shlaer-Mellor users joining us all the time.
> These are _new_ users, not existing users contacting PTI for the first time. These are companies who are looking at UML, then looking at Shlaer-Mellor and Translation and concluding that it represents superior technology.

That's excellent.

> >PT in Scotland are nowhere to be seen.

> This year, PTI has organised seminars for Shlaer-Mellor practitioners in London and Glasgow. We _sponsored_ OT98 in Oxford, providing sessions and a keynote, and are providing no fewer than three talks for the Embedded Systems Conference Europe at Royal Ascot in September.

Hmm, I forgot about the first two events. Perhaps my comment above was a little harsh (it was said in the context of Object Expo). I've only seen the Embedded Systems Conference mentioned on the PT Web site.

> >I haven't seen any new magazine articles from the Major Gurus for quite a time.

> Steve and Sally publish a steady stream of papers on the PT Web site.

Indeed they have published parts of their book. However, the average programmer/analyst is not that likely to find or fully understand them. What I would like to see are more papers like those Steve used to write for JOOP and Object Magazine, aimed at getting people interested in OOA/RD - and not necessarily just for Embedded/Real-Time users.

> I hope this clarifies the considerable level of Shlaer-Mellor activity in the UK,

Yep. I just wish I did not have to come across so much stuff about UML.

Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Duby...

> For the record, Pathfinder is doing its share of writing this year. I'll be presenting at C++ World in August and Embedded Systems in November. I also think the more exposure the better, so since you mentioned it... if you have a success story to share, I'm volunteering to collaborate with you.

Glad to hear it. I will contact you off-line.

> Integrating legacy code with models is a major concern for many of our clients and prospects. I've proposed this topic to the Embedded Systems East conference next spring.

Certainly a step in the right direction. But ES/E isn't exactly breaking out of the R-T/E market niche, is it? If S-M is going to be used in the MIS market, integrating legacy code is THE problem to deal with.

> > The QIT would define the metric and the necessary data, develop a plan for collecting the data unintrusively, solicit the data from S-M users, and eventually publish it. Any volunteers?
>
> Sounds intriguing. Do you have any preliminary ideas?

In the TQM/QIT world that would be jumping to Step 4. However, I do have some personal opinions. I think it should be simple -- perhaps simple counts of completed successes vs. total completed. It should probably be binned by project size because I think S-M is more scalable than other methodologies. The success rate should be a major selling point and is subject to fewer data ambiguities than things like defect rates and "maintainability". (Even simple binning by size is subject to ambiguities.) I also think companies can provide this data more easily than that for more complex measures.

--
H. S. Lahman                      There is nothing wrong with me that
Teradyne/ATB                      could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

> > > [OOA] also says nothing about the architecture of the translator

> > I'm a bit puzzled by this. I think of translation as a process and a translator as the means by which it can be performed automatically. Isn't this a little out of scope? Do I care how the translator has been developed?

> You are right. From the point of view of a system, you don't really care how the translator is developed (unless you are maintaining it). However, from a methodology point of view I think it is very important.

> OOA is big. Building a translator for OOA is a big investment.

Tell me about it! By the time you have a useful industrial-strength Translation System - with all the nice-to-have bits added in, like Coloring, Incremental Build Control and Data Type Selection, as well as the OOA-of-OOA, the actual Translator, the Code Templates you have developed, and of course the Software Architecture to bind them together - you really have made a huge investment.

> Since there are many ways of building translators, it follows that a rigorous investigation of translation methods using OOA will be a huge undertaking.

Not sure I can agree with this. I only know of the one translation method: Template Expansion by Formalism Navigation. Are there any others? [Template Expansion - that's a good name for it - Thanks]

> Fortunately, OOA is not a prerequisite for translation. You can use much simpler starting points to investigate translation. Of course, you will need to see if the methods scale up to OOA translation; but a lot of work can be done without that burden.

This is outside my experience since I have only ever worked with OOA translation.

> > > I tend to write a lot of translators. [...]

> > I've only needed to write one Translator, a dedicated program in C. Although the solution-architecture (Software Architecture?) has of course changed, the "problem-architecture" (OOA-of-OOA) has remained the same.

> This is a difference between us. I tend to develop a large number of simple products. I generally co-develop the problem-description and the solution-architecture (and the translator that maps them, though this lags the other two). Some of the translators are re-used, some are one-offs.

I'm interested to know why you have not standardized on using OOA as the problem-description.

> On those rare occasions when I describe the problem using an OOA model, I don't develop my own translator. OOA translators are big and we have an adequate translator that has been developed over a number of years by our architecture team. But that only produces C++ so is unusable for many of the problems I need to solve (where the product is hardware; or where we don't have a C++ compiler for the target processor; or ...).

Does your OOA translator only produce C++ because this is the language that is hard-coded into it, or is it because code templates are only available in this language?

> > > Recursive Design should be a formalism for describing the process of translating a problem-architecture onto a solution-architecture

> > I'm not sure what you're asking for here. Do you want a notation for describing the translation process? And which, as a sort of byproduct, also produces a Translator?
> > This looks to me like a possible application for OOA/RD. Recursive or what!?

> What I want is a solid foundation for building translators. What I have at the moment is informal techniques for the development of translators.

I agree it would be good to have a solid foundation for building a translator, but I'm failing to see why you don't always describe the problem using OOA and so can use the OOA Translator with your own custom templates.

> Some things are clear:

> . The thing that is translated is the model, not the formalism.

Isn't this obvious? Or am I missing something here?

> In some cases, a generic OOA translator is possible, but only if you don't use the population-data in the translation; and if you are happy with a generic architecture.

Aren't all OOA Translators generic in the sense that they handle a valid OOA model? Is a generic architecture one designed to execute any OOA model?

> . The back-end, code-generation, stage should do nothing more than template expansion.

Agree absolutely. And simply done by not allowing an explicit *if* directive in the Translator's Archetype Language.

> It can be tempting to write complex code generators that derive information on-the-fly. This approach causes maintenance problems. The templates should be written to navigate a well-defined information structure. (However, accessor techniques can be used to avoid actually building the structure before populating the templates). The information structure that is navigated could be defined by an OOA model.

If you do not mean the OOA-of-OOA then I have a problem with this bit. If you do this then your code templates must be designed to only navigate this particular model. Surely, effort is better spent by populating the OOA-of-OOA, then using an OOA Translator with custom templates for your solution-architecture.

> . The front-end, parsing, stage should do nothing more than extract information from the problem description to populate a well-defined information structure. This is the inverse of the code generation step. This step could be defined in terms of populating an OOA model. If the thing being translated is an OOA model then this step populates the OOA-of-OOA - this should be done by a CASE tool. However, it is more common, in my experience, that the source description is a text document (possibly Word, FrameMaker or HTML). The "Implementation Specification" is a document written to interpret the initial requirement specification; and can usefully be written to be used as the source code for the translation.

Have you ever tried populating an OOA-of-OOA by hand? I seem to remember that 3 objects was my limit! As you say, a CASE tool should be the source, but it can be a drawing package such as Visio.

> . The information structure used by the front end is independent of the information structure of the back-end. Don't attempt to "simplify" the system by polluting the problem-architecture with the solution-architecture (or vice versa).

> >> [using OOA is jumping in at the deep end]

> > The trouble is you have to map from one formalism to another. If not S-M OOA, what other formalism have you been using?

> A useful formalism might be an OOA model. Every OOA model provides a rigorous definition for the interpretation of population tables. So if you want to translate population data, then the OOA model provides the underlying formalism for that data. If you want to translate OOA, then the OOA-of-OOA is the underlying formalism.
> But if I want to translate a bus-slave-register-set model then an OOA model provides the formalism for describing the registers of the bus-slave (peripheral).

> [Aside: my bus-slave model has been used to produce both low-level C and assembler macros for accessing the peripheral registers' bitfields; and for generating a VHDL hardware description of the peripheral itself. This is a good example of a scenario where a generic OOA translator is completely irrelevant.]

It seems to me you are not going the extra step that will make your solution more generic. As I read it, you have an OOA model of the bus-slave-register-set. Why not use this OOA model to populate the OOA-of-OOA, and use an OOA Translator with the appropriate templates to generate an application program that will read in the data that formerly populated your OOA model and either run the C code and macros directly or output VHDL?

> > I think having a good understanding of OOA (or another formalism) is a prerequisite to understanding the Translation Process. This is because the Translation Process must be described in terms of Metadata since it's not application specific.

> I think I disagree with you. The problem description and the solution architecture are both application specific; and, in my experience, the translator that maps one to the other is also application specific.

This is the nub of the argument. Aren't we both using S-M OOA/RD because the method enables us to engineer general solutions to specific problems? Producing a problem-description as an OOA model puts it into *Standard Form*. Likewise, creating a solution-architecture in the form of a Software Architecture puts that into *Standard Form* and allows any OOA model to be executed. So only one Translator is required, because the mapping is always between the two *Standard Forms*, neither of which is application specific.

> It is probably more correct to say that a translator is specific to a class of applications. The majority of applications can be placed in one of a small number of classes; and can thus reuse a general purpose architecture. However, you can always add value by "sub-classing" these generic application classes; and sometimes it is useful to start from scratch.

Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Paul Higham...

Some other things I would like to see:

1) A semi-permanent technical presence on comp.object to defend and promote S-M.

2) Development of a simplified form of S-M OOA: a version with the concurrent aspects removed to reduce the uncertainty of event generation/execution and data consistency. This would allow newbies an easier way in and also help out the MIS market.

3) Distribution of a freeware S-M OOA/RD tool to include code generation by Translation. There does not seem to be much money to be made from analysis and design tools anyway. :-)

4) Perversely, we must convince the Three Amigos that Translation is a *non-viable* approach. :-) IMHO, it is only a matter of time before the UML/Elaborationists cotton on to the power of Translation, and then where will we all be?

BTW, thanks to Lahman for a most interesting but disturbing analysis of the current situation.

Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

Subject: [Fwd: (SMU) S/M to C++ question...]
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

Just a couple of comments peripheral to the main line discussion... Dave has some interesting ideas about architectures that I am sure he will get into.

> By the time you have a useful industrial-strength Translation System - with all the nice-to-have bits added in, like Coloring, Incremental Build Control and Data Type Selection, as well as the OOA-of-OOA, the actual Translator, the Code Templates you have developed, and of course the Software Architecture to bind them together - you really have made a huge investment.

True, but it should be a one-time investment. Ideally we should be able to buy OTS architectures that Just Work. And, indeed, we have a cottage industry of CASE vendors who provide this with varying degrees of success. I think the current state of the art of code generation has two problems: poor optimization in the generated code and a lack of interoperability (i.e., I can't use vendor A's architecture with vendor B's code generator).

> Not sure I can agree with this. I only know of the one translation method: Template Expansion by Formalism Navigation. Are there any others?

There is at least one tool that uses direct compilation (in conjunction with a library of architectural mechanisms). The main justification for templates is that they provide a very usable mechanism for the developer to modify what the code generator would do by default.

> I agree it would be good to have a solid foundation for building a translator, but I'm failing to see why you don't always describe the problem using OOA and so can use the OOA Translator with your own custom templates.

I think the issue is a description of the architectural artifacts rather than the problem space features. To provide a mapping for the translation, one needs a formalism on both sides. The OOA provides it for the problem solution, but you need a similar description for the available implementation artifacts (e.g., linked lists, semaphores, queue managers, etc.). An OOA-of-Architecture, if you will.

> > Some things are clear:
> >
> > . The thing that is translated is the model, not the formalism.
>
> Isn't this obvious? Or am I missing something here?

Perhaps, but in practice I think there is opportunity to go awry. For example, the OOA formalism utilizes create and delete accessors to formalize the bounds of life cycles. This is important for maintaining referential integrity in the implementation (and other things). However, the translation does not have to literally perform heap allocations/deallocations every time an instance is born or dies. For performance reasons it is often quite important to reuse the memory for instances' data without the overhead of heap operations. The translation is free to do this so long as it can also ensure referential integrity. What is being modeled is that instances have limited life and they can only be accessed in a limited time span during the execution. The use of create/delete accessors in the formalism describes this nicely, but the translation is not obligated to take it too literally. The translation just has to ensure the access limits are honored.
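To make that concrete: one common way to honor the accessor semantics without paying for the heap every time is an instance pool. The sketch below is purely illustrative (an invented class name, and no concurrency concerns) -- it recycles dead instances' storage through a free list:

    #include <new>
    #include <vector>

    class Robot {
    public:
        // Create accessor: reuse a recycled slot when one is available,
        // otherwise fall back to the heap.
        static Robot* create(int id) {
            void* slot;
            if (!free_list.empty()) {
                slot = free_list.back();
                free_list.pop_back();
            } else {
                slot = ::operator new(sizeof(Robot));
            }
            return new (slot) Robot(id);   // placement new into the slot
        }

        // Delete accessor: run the destructor, but keep the storage.
        static void destroy(Robot* r) {
            r->~Robot();
            free_list.push_back(r);
        }

    private:
        explicit Robot(int id) : robot_id(id) {}
        int robot_id;
        static std::vector<void*> free_list;  // storage awaiting reuse
    };
    std::vector<void*> Robot::free_list;

The referential-integrity burden is unchanged: nothing here stops a stale pointer from reaching a recycled slot, so the architecture must still police the access limits.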
--
H. S. Lahman                      There is nothing wrong with me that
Teradyne/ATB                      could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 10:53 AM 7/29/98 BST-1, shlaer-mellor-users@projtech.com wrote:

>smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
>--------------------------------------------------------------------

> 4) Perversely, we must convince the Three Amigos that Translation is a *non-viable* approach. :-) IMHO, it is only a matter of time before the UML/Elaborationists cotton on to the power of Translation, and then where will we all be?

*We* will be the UMLers doing translation. Like the "Peter's Principles" poster on my office wall says: "If you can't beat them, join them, then beat them." I'll guess that in a few years most of us will be doing UML - the *right* way.

_______________________________________________________
| Pathfinder Solutions Inc.  www.pathfindersol.com    |
| 888-OOA-PATH                                        |
| effective solutions for software engineering        |
| challenges                                          |
| Peter Fontana              voice: +01 508-384-1392  |
| peterf@pathfindersol.com   fax:   +01 508-384-7906  |
|_____________________________________________________|

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

> 2) Development of a simplified form of S-M OOA: a version with the concurrent aspects removed to reduce the uncertainty of event generation/execution and data consistency. This would allow newbies an easier way in and also help out the MIS market.

I am not in favor of this. Concurrency issues are mostly a matter of being aware of the problems when creating state actions. The same result can be achieved by simply assuming a synchronous view of time and a synchronous architecture. This allows a carefree approach to building the models. In either case the downside is that bad habits could be developed that would be a problem when one later encountered a true concurrent system. It is generally a lot easier to learn new habits than to get rid of old ones.

Besides, off the top of my head I can't think of anything that would be removed from the OOA that explicitly represents concurrency.

--
H. S. Lahman                      There is nothing wrong with me that
Teradyne/ATB                      could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

[I am currently in the process of changing employer (and country!), so please don't expect any detailed real-life examples. Also, I apologise for the length of this post; I've had plenty of time to write it but have been too lazy to spend the time to reduce it.]

Responding to Mike Finn:

> Not sure I can agree with this. I only know of the one translation method: Template Expansion by Formalism Navigation. Are there any others?

I fear that you may be confusing translation with code generation. Template expansion is the easy bit. Defining the information model that you navigate during expansion is harder. Defining the mapping to populate that information is even harder, and is the real guts of translation.
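[Aside: to illustrate just the "easy bit", here is a toy sketch - invented C++ types, not any real archetype language - of expansion as a loop over a pre-built information structure:

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Attribute { std::string name, type; };
    struct Object    { std::string name; std::vector<Attribute> attrs; };

    // Back-end: pure expansion.  All decisions were made when the
    // information structure was populated; none are made here.
    void expand(const Object& obj) {
        std::printf("class %s {\n", obj.name.c_str());
        for (std::size_t i = 0; i < obj.attrs.size(); ++i)
            std::printf("    %s %s;\n",
                        obj.attrs[i].type.c_str(),
                        obj.attrs[i].name.c_str());
        std::printf("};\n");
    }

The hard parts - deciding what Object and Attribute should look like, and populating them from the source model - are exactly the parts the sketch takes as given.]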
However, to get back to templates: one common technique is to write the templates using an "assume the data exists" approach, and then work backwards to define the data structures. This is fine for small-scale translation but doesn't scale. But does it need to?

Using a purely "architectural-analysis" approach appears to have stronger foundations, but it can be hard to tie it to templates. An architectural analysis is like doing a perfect design and thus can suffer from the old designer-programmer waterfall problem: when you finally do the templates, you have to go back and change the design (architecture).

The correct approach probably lies somewhere between these two extremes. But where, and how to control the process, are questions that I wouldn't like to answer. The answer is probably somewhat project-specific. A more fundamental question is: what templates should be written?

> Does your OOA translator only produce C++ because this is the language that is hard-coded into it, or is it because code templates are only available in this language?

The problem is not the language - it's the architecture. The architecture is an object-oriented design. If I want to synthesize my VHDL into a netlist (and thus produce real hardware) then I need an RTL architecture. It would be silly to attempt to write templates for RTL from an OOD architecture. (It's like trying to write templates for a distributed implementation based on a single-task architecture.)

> I'm interested to know why you have not standardized on using OOA as the problem-description.

It's not always appropriate. There is no magic bullet. I don't believe the propaganda that "an OOA model is a universal means for communication between 'expert' and software engineer" (even when they are the same person). Behind every OOA model there lies a set of natural-language documentation (e.g. the initial specification, technical notes, etc.). I have found that the information contained in these is frequently suitable as input to a translation system; it populates an OOA model somewhere in the translation process.

> I agree it would be good to have a solid foundation for building a translator, but I'm failing to see why you don't always describe the problem using OOA and so can use the OOA Translator with your own custom templates.

Only a simulator (or naive OOA code generator) would base its code templates on the OOA-of-OOA. Any other template system requires a different underlying model. This underlying model is the architecture. The most important part of a translation system is to map the problem-description-architecture (possibly OOA-of-OOA) onto the solution-architecture. The solution architecture defines the concepts that will be in the templates for code generation.

To put this in OOA terms: OOA-of-OOA is one domain; OOA-of-OOD is another; and OOA-of-RTL is yet another. (For "one domain" you might need to substitute "one set of domains".)

> > [Aside: my bus-slave model has been used to produce both low-level C and assembler macros for accessing the peripheral registers' bitfields; and for generating a VHDL hardware description of the peripheral itself. This is a good example of a scenario where a generic OOA translator is completely irrelevant.]

> It seems to me you are not going the extra step that will make your solution more generic. As I read it, you have an OOA model of the bus-slave-register-set.
> Why not use this OOA model to populate the OOA-of-OOA, and use an OOA Translator with the appropriate templates to generate an application program that will read in the data that formerly populated your OOA model and either run the C code and macros directly or output VHDL?

Because it would be utterly pointless to do so :-)

Specifically: the bus-slave-register-set is an implementation concept. It defines the structure of an implementation, and thus defines the structure of the information that will be navigated during template expansion. The fact that I have an OOA model of this architectural concept enables me to use an OOA simulator (or code generator) to simulate the bus-slave at an architectural level. But when I want to implement the VHDL code (and the software API to it) then no further translation is required. I only need the final step: code generation.

It is possible that some of the code that I produce will be used as the population files for subsequent code generation steps (see below). It is also possible that the population file for my bus-slave-register-set could be the result of a previous translation step.

Let me give a trivial (but realistic) example of a translation system. The problem is to maintain a set of integer constants in several different source languages (pre-ansi C header, C++ header, ARM assembler header and VHDL model). I'll solve it using an OOA model and a problem-specific set of templates. Perhaps you could explain how the use of a full-blown OOA-of-OOA based translator would improve the solution...

First, the OOA model. There is just one object, and it's passive:

    CONSTANT(*name : alphanumeric-text, value : hex-number)

And let's have a population table:

    CONTROL_REG_ADDRESS       E0001000
    STATUS_REG_ADDRESS        E0001004
    CONTROL_REG_INIT_VALUE    00E0A0FF

Now some simple templates (you should be able to work out the syntax):

pre-ansi C:

    \: while (($key, $value) = each(%CONSTANT)) {
    #define $key 0x$value
    \: }

c++:

    \: while (($key, $value) = each(%CONSTANT)) {
    const unsigned int $key = 0x$value;
    \: }

asm:

    \: while (($key, $value) = each(%CONSTANT)) {
    $key: DCD 0x$value
    \: }

vhdl:

    \: while (($key, $value) = each(%CONSTANT)) {
    CONSTANT $key : std_logic_vector(31 DOWNTO 0) := hex("$value");
    \: }

You will note that I haven't used the OOA-of-OOA. This example demonstrates how translation can usefully be used to translate an OOA model rather than the OOA-of-OOA.

(However, I have cheated a little; the mapping of the population file into the data structures to be navigated will, in reality, need to do a little conditioning to ensure the direct substitution is valid for the templates. For example, for the VHDL template I would need to pad '$value' with leading zeros to ensure precisely 8 digits. Similarly, the '0x' prefix should be eliminated from the other templates. This is an interesting subject in its own right - a tiny sketch of such a conditioning step appears below.)

The importance of this to recursive design is quite simple. Whereas you, and many other people, take the top-down approach of attempting to translate the OOA-of-OOA model, I use a bottom-up approach of producing simple translators and then using them as components in bigger translators. For example, take the population file I used above. It could easily be produced as a template in a bus-slave code generator. Of course, I could miss out the intermediate population file template and do the whole thing in one architecture; but why should I? More to the point: why shouldn't I - an exercise left to the reader.
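[Aside: purely by way of illustration, the conditioning step mentioned above could be as small as this hypothetical filter, which reads name/value pairs from the population file and re-emits each value normalised to exactly 8 hex digits, ready for direct substitution:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        char name[64], value[16];
        // read "NAME VALUE" pairs from the population file on stdin
        while (std::scanf("%63s %15s", name, value) == 2) {
            // re-emit with the value zero-padded to 8 hex digits
            unsigned long v = std::strtoul(value, 0, 16);
            std::printf("%s %08lX\n", name, v);
        }
        return 0;
    }

The real point is that this mapping is a separate, well-defined stage; the templates themselves stay dumb.]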
In taking the bottom-up approach, I produce a lot of translators whose architectures are described by OOA models. Most are more complex than the example above. The models are all solution-architectures. Thus the code generators (templates) follow the model, not the meta-model (OOA-of-OOA). However, because they are OOA models, it may be possible to examine their (performance) characteristics using an OOA simulator or generic architecture. This, of course, requires a complete architecture with state models (yet another interesting subject).

As I said at the top of this posting: the code generation is the simple part of translation. Even producing the solution-architecture is quite easy. Mapping a problem-model onto an appropriate solution-architecture is the hard bit.

Dave.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman:

> True, but it should be a one-time investment. Ideally we should be able to buy OTS architectures that Just Work. And, indeed, we have a cottage industry of CASE vendors who provide this with varying degrees of success. I think the current state of the art of code generation has two problems: poor optimization in the generated code and a lack of interoperability (i.e., I can't use vendor A's architecture with vendor B's code generator).

OTS code generators are, of course, beneficial; but should accurately be described as compilers of a high-level language (OOA). I believe that the power of the translational approach goes beyond this - the translator is as much part of the project's IP as the application domain model. The two should be given equal status, and both developed with an eye on reuse. An OTS code generator is just large-scale reuse of the translator part of the project. However, *use* is just as important as *reuse*. You should never be scared to start from scratch for some parts of a project.

Dave.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> First, the OOA model. There is just one object, and it's passive:
>
>     CONSTANT(*name : alphanumeric-text, value : hex-number)
>
> And let's have a population table:
>
>     CONTROL_REG_ADDRESS       E0001000
>     STATUS_REG_ADDRESS        E0001004
>     CONTROL_REG_INIT_VALUE    00E0A0FF
>
> Now some simple templates (you should be able to work out the syntax):
>
> pre-ansi C:
>
>     \: while (($key, $value) = each(%CONSTANT)) {
>     #define $key 0x$value
>     \: }
>
> c++:
>
>     \: while (($key, $value) = each(%CONSTANT)) {
>     const unsigned int $key = 0x$value;
>     \: }
>
> asm:
>
>     \: while (($key, $value) = each(%CONSTANT)) {
>     $key: DCD 0x$value
>     \: }
>
> vhdl:
>
>     \: while (($key, $value) = each(%CONSTANT)) {
>     CONSTANT $key : std_logic_vector(31 DOWNTO 0) := hex("$value");
>     \: }
>
> You will note that I haven't used the OOA-of-OOA. This example demonstrates how translation can usefully be used to translate an OOA model rather than the OOA-of-OOA.
>
> The importance of this to recursive design is quite simple.
> Whereas you, and many other people, take the top-down approach of attempting to translate the OOA-of-OOA model, I use a bottom-up approach of producing simple translators and then using them as components in bigger translators.

I am not convinced that you are not translating an OOA-of-OOA. For example, you have already mapped the identifier attribute for "name" into a syntactic context. This only has meaning if you already have a mental OOA-of-OOA in mind that ensures that "alphanumeric-text" satisfies the whole packet of relational rules rather than, say, a semantic that indicates that "name" is an array. You may be starting at the bottom, but there is a very strong chain of mapping that allows you to assume that your low-level construct will always Just Work as a type definition, because no higher-level construct will ever hand your detailed construct a "$key" and "$value" that define an array.

Put another way, as soon as you move up a level of detail and select a more detailed construct from all possible constructs at the same level of detail, you have walked up a mapping link towards the OOA-of-OOA. How else would you know which detail constructs were valid for the tokens in hand when you put together a higher-level construct? That is, that they satisfy the semantics of "$key" and "$value" and that a type definition is appropriate for the current level's context? The values of "$key" and "$value" come from the model, but the context for selecting constructs comes from the mapping of the OOA-of-OOA. It seems to me that the discipline required for the bottom-up approach is provided by the OOA-of-OOA mapping that you have already memorized.

--
H. S. Lahman                      There is nothing wrong with me that
Teradyne/ATB                      could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > True, but it should be a one-time investment. Ideally we should be able to buy OTS architectures that Just Work.

> OTS code generators are, of course, beneficial; but should accurately be described as compilers of a high-level language (OOA). I believe that the power of the translational approach goes beyond this - the translator is as much part of the project's IP as the application domain model.

I don't think we disagree here fundamentally. I certainly agree that code generators are basically compilers (if one is not too picky about the conventional distinction between compilers and translators). Regardless of how sophisticated and configurable the OTS architecture is, somebody still has to decide what the translation rules should be to guide the wedding of models to architecture. I agree that S-M translation does provide a powerful mechanism in this respect to customize the implementation. At the same time, it separates this implementation tweaking from the problem solution in a formal manner, which is even better.

> The two should be given equal status, and both developed with an eye on reuse. An OTS code generator is just large-scale reuse of the translator part of the project. However, *use* is just as important as *reuse*. You should never be scared to start from scratch for some parts of a project.

I don't disagree with this, either. I would just hope that as the tools get more sophisticated the need to do this should be greatly reduced.
I started out writing programs on plugboards and it was such a character-building experience (along with changing vacuum tubes) that I didn't do any software throughout the '60s. I also haven't written any Assembler code for nearly a decade and I don't miss that either. All things considered, I think I would prefer to just color in some translation rules rather than build an architecture or code generator, whenever possible.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> > 2) Development of a simplified form of S-M OOA. A version with
> > the concurrent aspects removed to reduce the uncertainty of
> > event generation/execution and data consistency. This would
> > allow newbies an easier way in and also help out the MIS market.
>
> I am not in favor of this. Concurrency issues are mostly a matter of being
> aware of the problems when creating state actions. The same result can be
> achieved by simply assuming a synchronous view of time and a synchronous
> architecture. This allows a carefree approach to building the models. In
> either case the downside is that bad habits could be developed that would be a
> problem when one later encountered a true concurrent system. It is generally a
> lot easier to learn new habits than to get rid of old ones.

You are quite right to be sceptical and I agree with what you say. I should have known better than to mention this idea without giving a lot more detail. :-)

> Besides, off the top of my head I can't think of anything that would be removed
> from the OOA that explicitly represents concurrency.

Just the opposite. I'm thinking of adding something to OOA to implicitly enforce sequencing, but still retain the Clunk, Clunk, Click of events moving between state machines. I think this subject should be left to another time. I just need to perform a few experiments first...

Mike
--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

> [I am currently in the process of changing employer (and country!)

Good luck in your new venture!

> I fear that you may be confusing translation with code generation.
> Template expansion is the easy bit. Defining the information model
> that you navigate during expansion is harder. Defining the mapping
> to populate that information is even harder; and is the real
> guts of translation.

However, to get back to templates: I think we have already covered this ground.

> I don't believe the propaganda that "An OOA model is a universal
> means for communication between 'expert' and software engineer"
> (even when they are the same person).

We agree about this. OOA models are for software developers. If an 'expert' understands them, that's fine. If not, we draw another picture or explain it some other way.

> Behind every OOA model there lies a set of natural-language
> documentation (e.g. the initial specification, technical notes,
> etc). I have found that the information contained in these
> is frequently suitable as input to a translation system; it
> populates an OOA model somewhere in the translation process.
Are we eating the same food here? :-) A customer ITT or Requirements Specification document (even when captured and engineered in a tool like RTM) will be in no form to use as the input to a translation system. This is what the OO analysis is for. OK, maybe because you deal with low-level hardware your documents are suitable for some form of translation, but I think this is an exception.

> > I agree it would be good to have a solid foundation for
> > building a translator, but I'm failing to see why you don't
> > always describe the problem using OOA and so can use the
> > OOA Translator with your own custom templates.
>
> Only a simulator (or naive OOA code generator) would base its
> code templates on the OOA-of-OOA. Any other template system
> requires a different underlying model. This underlying
> model is the architecture.

This rings a bell! Steve Mellor used to write for JOOP ("A deeper look..."). In that seminal paper of OCT-1994 he describes an Archetype Language and talks about Replacements: "The elements inside the angle brackets, such as attribute.type and attribute.name, are the names of attributes of objects in an OOA of the selected architecture." So Steve's underlying model, the information structure that is navigated, is an OOA-of-Architecture. Do you agree with Steve?

[Aside: I had started work on my Translator before this paper came out. When I saw it I emailed Steve to let him know I used the OOA-of-OOA. I wonder what he now thinks?]

I should make it clear that when I talk about the OOA-of-OOA in this sort of context, I really mean the Meta Model, of which the OOA-of-OOA is the major part.

> The most important part of a translation system is to map the
> problem-description-architecture (possibly OOA-of-OOA) onto
> the solution-architecture. The solution architecture defines
> the concepts that will be in the templates for code generation.

I'm getting a bit confused by these hyphenated terms: solution-architecture, problem-architecture, problem-description and now problem-description-architecture. And also problem-model.

> Let me give a trivial (but realistic) example of a translation
> system. The problem is to maintain a set of integer constants
> in several different source languages (pre-ANSI C header,
> C++ header, ARM assembler header and VHDL model). I'll
> solve it using an OOA model and a problem-specific set of
> templates. Perhaps you could explain how the use of a
> full-blown OOA-of-OOA based translator would improve the
> solution...

OK, but you won't like it. :-)

> First, the OOA model. There is just one object, and it's passive:
>
>     CONSTANT(*name : alphanumeric-text, value : hex-number)

This looks to me like a fragment of the OOA-of-OOA in disguise: a merging of two objects:

    Object    (Object id; Name)
    Attribute (Attribute id; Name)

Replace the OOA model with one *active* object:

    Register (register id; address, value)

The object has one state, which could be called Initialise. The action for this state sends an event carrying Address and Value to some PIO service domain that writes the attributes to a file in the format required. The single event could be called "Init register".

> And let's have a population table:
>
>     CONTROL_REG_ADDRESS     E0001000
>     STATUS_REG_ADDRESS      E0001004
>     CONTROL_REG_INIT_VALUE  00E0A0FF

This is the data with which we will initialise (at run time) our system (the Register object), ready for the event "Init register".
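As a rough sketch of the run-time behaviour this implies (rendered here in Perl purely for illustration -- the object, state and event names come from the model above; the population values and everything else are invented):

    # Static population of the Register object, loaded at initialisation.
    my @registers = (
        { id => 1, name => 'CONTROL_REG', address => 'E0001000', value => '00E0A0FF' },
        { id => 2, name => 'STATUS_REG',  address => 'E0001004', value => '00000000' },
    );

    # The Initialise state action: each instance receives "Init register"
    # and forwards its attributes to the PIO service domain.
    sub init_register {
        my ($self) = @_;
        pio_write($self->{address}, $self->{value});
    }

    # Stand-in for the PIO service domain, which writes the required format.
    sub pio_write {
        my ($address, $value) = @_;
        print "$address := $value\n";
    }

    init_register($_) for @registers;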
> Now some simple templates (you should be able to work out the syntax):
>
> pre-ANSI C:
>
>     \: while (($key, $value) = each(%CONSTANT)) {
>     #define $key 0x$value
>     \: }

Only the standard OOA templates are required. When the system is initialised, the Register object instances are read in, each instance is sent the "Init register" event, and out pops a file with the same list your system produced.

> the templates. For example, for the VHDL template I would need to pad
> '$value' with leading zeros to ensure precisely 8 digits. Similarly,
> the '0x' prefix should be eliminated from the other templates. This
> is an interesting subject in its own right.)

This can be done in the Translator with formatting qualifiers for the replacement text; or, if it's a matter of selection, Coloring can help.

Mike
--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

'archive.9808' --

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

responding to Mike Finn responding to me:

> > I have found that the information contained in these
> > is frequently suitable as input to a translation system; it
> > populates an OOA model somewhere in the translation process.
>
> Are we eating the same food here? :-) A customer ITT or Requirements
> Specification Document (even when captured and engineered in a tool
> like RTM) will be in no form to use as the input to a translation
> system.

I am thinking more of the information in "technical notes". In trying to understand a problem, we gather data in informal ways and write it down. Frequently this will include tables that attempt to express some important aspect of the problem.

The information in technical notes may be expressed in an OOA model in one of two ways: it might form the actual model, or it might be used to populate that model. Once the "model" part of the information has been formally expressed in an OOA model, that model can be used as a formal basis for the interpretation of the "population" part. Scripts can then be written to translate the "natural expression" of the population data within the technical note into the population of the OOA model. This is generally limited to tables and lists.

> I'm getting a bit confused by these hyphenated terms:
> solution-architecture, problem-architecture, problem-description
> and now problem-description-architecture. And also problem-model.

I must apologise. It is always difficult to find a good name for something that isn't already overloaded with multiple other meanings that introduce confusion. The problem is compounded when I attempt to be non-specific to OOA. In terms of OOA:

    solution-architecture            => OOA-of-Architecture
    problem-architecture             => OOA model
    problem-model                    => OOA model + (static) population
    problem-description-architecture => OOA-of-OOA

> This rings a bell! [...]
> So Steve's underlying model, the information structure that is
> navigated, is an OOA-of-Architecture. Do you agree with Steve?

Yes. OOA-of-Architecture is precisely what is navigated. This is not OOA-of-OOA. But two quibbles:

1. The use of angle brackets for substitution is a bad idea. Most
   languages already use these symbols (C++ has at least 4 distinct
   uses for them!).

2. No OOA-of-Architecture should ever contain objects such as "object"
   or "attribute". They are either an indication of domain pollution
   (from the OOA-of-OOA) or are very confusing. So you should never see
   substitutions like "attribute.name" within a code generation template.
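To make the second quibble concrete (a sketch only -- the syntax follows the Perl-style templates used earlier in this thread, and the Task object is an invented architecture object): a template polluted by the OOA-of-OOA is forced into meta-level substitutions such as

    <attribute.type> <attribute.name>;

whereas a template that navigates an OOA-of-Architecture substitutes only architecture-level concepts:

    \: foreach $task (@tasks) {
    create_task("$name{$task}", $priority{$task});
    \: }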
Unfortunately, most of the recent papers from PT have concentrated on refining the OOA formalism rather than refining techniques for translation.

[and then I presented an example - see previous posts for details]

> > Perhaps you could explain how the use of a
> > full-blown OOA-of-OOA based translator would improve the solution...
>
> OK, but you won't like it. :-)

Actually, I do like it - it's a domain I know well. It's just that it doesn't do anything to solve the problem that I gave. (Your use of it demonstrates a mismatch of communication somewhere.)

I said: "CONSTANT(*name : alphanumeric-text, value : hex-number)"
You said: "Register (register id; address, value)"

These are obviously two different domains. If you read my initial problem description, you'll see that I said that the problem was: "maintain a set of constants". I then presented a set of population information within which you perceived some structure. The population data I presented was, in fact, derived in a previous translation step from a domain similar to your suggestion - but in being translated it loses the structure of its source and gains new structure from the architecture.

The model you presented is a service domain. Its population is defined by application domains (i.e. UARTs, parallel ports, etc.) which use registers; thus the "registers" domain is a useful service -- this is described in my 1997 paper for the UK SMUG. However, my problem was a lower-level architectural issue. I think that it is easy to argue that there are sources of constants other than register information. Your model does not address those constants.

The requirement was, essentially, to write out a set of "name := value" declarations in a number of languages for an arbitrary set of constants. With my simple model, the templates take the form of "foreach constant, do its declaration". As soon as you impose higher-level information on the model, the simplicity of the template is obscured. Compare:

    \: foreach $constant (@constants) {
    #define $name{$constant} $value{$constant}
    \: }

with

    \: foreach $register (@registers) {
    #define \U$name{$register}\E_REG_ADDR $address{$register}
    #define \U$name{$register}\E_REG_INIT_VALUE $init_value{$register}
    \: }
    \: foreach $port (@ports) {
    #define \U$name{$port}\E_PORT_WIDTH $width{$port}
    \: }

If there are multiple sources of constants then the template file for the constant declarations may get quite big (and complex). It might even need conditional clauses. You then need all this complexity in each template that uses the information. Having a model that is dedicated to the structure of the source code isolates the important feature (the set of constants). This model can be populated by mappings from multiple sources. It also makes it much easier to attach the colorations that specify the formatting of values within a template.

Dave.

_________________________________________________________
DO YOU YAHOO!?
Get your free @yahoo.com address at http://mail.yahoo.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> I don't think we disagree here fundamentally. I certainly agree that code
> generators are basically compilers (if one is not too picky about the
> conventional distinction between compilers and translators).

I think that, actually, we do disagree ... very slightly. I wish to argue that translators are not compilers, but that OTS translators are. To put it another way: compilers are a subset of translators.
> All things considered, I think I would prefer to
> just color in some translation rules rather than
> building an architecture or code generator
> whenever possible.

This presupposes that translation is a heavyweight process. It's not. A simple translator can be knocked up in half a day. However, if you want a fully featured, general-purpose OOA translator then it may take a bit longer. That's where the OTS translators are needed; and where, IMHO, the benefits of translation are diminished (but the benefits of OOA are enhanced).

Dave.

_________________________________________________________
DO YOU YAHOO!?
Get your free @yahoo.com address at http://mail.yahoo.com

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

> solution-architecture            => OOA-of-Architecture
> problem-architecture             => OOA model
> problem-model                    => OOA model + (static) population
> problem-description-architecture => OOA-of-OOA

Whoops! In OOA terms, I have been interpreting the solution-architecture to be the Software Architecture and the problem-architecture to be the OOA-of-OOA. I think the source of my confusion was when we said:

DW> MF> DW> I tend to write a lot of translators. [...]
DW> MF> I've only needed to write one Translator, a dedicated program in C.
DW> MF> Although the solution-architecture (Software Architecture?) has of
DW> MF> course changed, the "problem-architecture" (OOA-of-OOA) has remained
DW> MF> the same.
DW> This is a difference between us. I tend to develop a large number
DW> of simple products. I generally codevelop the problem-description
DW> and the solution-architecture (and the translator that maps them,
DW> though this lags the other two). Some of the translators are
DW> re-used, some are one-offs.

Could you define a few more terms?

    IP   => Internet Protocol ?
    ARM  => Acorn RISC Machines ?
    VHDL => Very High-level Design Language ?
    problem-description => ?

> Yes. OOA-of-Architecture is precisely what is navigated. This is
> not OOA-of-OOA.
> But two quibbles:
> 1. The use of angle brackets for substitution is a bad idea. Most
> languages already use these symbols (C++ has at least 4 distinct
> uses for them!).

In practice, an Archetype Language needs a unique string to indicate the start of a substitution.

> 2. No OOA-of-Architecture should ever contain objects such as
> "object" or "attribute".

Agreed.

> They are either an indication of domain pollution (from the OOA-of-OOA)
> or are very confusing.

They most certainly belong to the OOA-of-OOA domain.

> So you should never see substitutions like "attribute.name" within
> a code generation template.

Since the templates I write navigate the OOA-of-OOA, these are exactly the substitutions I make use of!

> Unfortunately, most of the recent papers from PT have concentrated
> on refining the OOA formalism rather than refining techniques for
> translation.

Sadly true. More papers on Translation would be great. But for now you'll just have to buy BridgePoint. :-)

> doesn't do anything to solve the problem that I gave. (Your use of it
> demonstrates a mismatch of communication somewhere.)

Stranger things have happened. :-)

> I said: "CONSTANT(*name : alphanumeric-text, value : hex-number)"
> You said: "Register (register id; address, value)"
> These are obviously two different domains. If you read my initial
> problem description, you'll see that I said that the problem was:
> "maintain a set of constants".
> I then presented a set of population information within which you
> perceived some structure.

The Application Domain addresses the problem. You said the problem was to maintain a set of integer constants [...]. You then presented an OOA model:

> CONSTANT(*name : alphanumeric-text, value : hex-number)

So I thought you were presenting the Application Domain, but in fact this was your solution-architecture (or OOA-of-Architecture model, in OOA terms). I suggested that the domain containing the Register object was the Application Domain, but you say it's a Service Domain. So my question is: why did you not start with the Application Domain?

> The population data I presented was, in fact, derived in a previous
> translation step from a domain similar to your suggestion - but in
> being translated it loses the structure of its source and gains
> new structure from the architecture.

Hmm, for me, your idea of translation steps does complicate matters somewhat. It's something that I've not wanted to discuss. :-)

> The model you presented is a service domain. Its population is defined by
> application domains (i.e. UARTs, parallel ports, etc.) which use registers;
> thus the "registers" domain is a useful service -- this is described in
> my 1997 paper for the UK SMUG. However, my problem was a lower-level
> architectural issue. I think that it is easy to argue that there are
> sources of constants other than register information. Your model does not
> address those constants.

[...]

Mike
--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > I don't think we disagree here fundamentally. I certainly agree
> > that code generators are basically compilers (if one is not too picky
> > about the conventional distinction between compilers and translators).
>
> I think that, actually, we do disagree ... very
> slightly. I wish to argue that translators are not
> compilers, but that OTS translators are.
>
> To put it another way: compilers are a subset of
> translators.

I agree that a compiler is a special case of translator -- if one accepts that binary machine code is a language (which I do). Translators in general translate between one language and another. Since the output of most OTS translators is another higher-level language, I don't regard them as compilers.

> > All things considered, I think I would prefer to
> > just color in some translation rules rather than
> > building an architecture or code generator
> > whenever possible.
>
> This presupposes that translation is a heavyweight
> process. It's not. A simple translator can be knocked
> up in half a day.
>
> However, if you want a fully featured, general-purpose
> OOA translator then it may take a bit
> longer. That's where the OTS translators are
> needed; and where, IMHO, the benefits of translation
> are diminished (but the benefits of OOA are
> enhanced).

What I was trying to convey is that I *do* want a fully featured, general-purpose OOA translator. The sorts of lightweight tools you are talking about are for very mechanical processes, and that is exactly the burden that I want my OTS translator to remove from my shoulders.

BTW, I don't disagree with you in practice. We use direct translation of things like hardware bitsheets with simple text manipulation tools. However, this is usually just for bridges.
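For concreteness, such a pass can be a few lines of Perl (the bitsheet format and the names below are invented for illustration; this is not our actual tooling):

    #!/usr/bin/perl
    # Reads bitsheet lines of the form "REGISTER FIELD BIT", e.g.
    #   STATUS READY 3
    # and emits C mask definitions for the bridge code.
    while (<>) {
        next if /^\s*(#|$)/;               # skip comments and blank lines
        my ($reg, $field, $bit) = split;
        printf "#define %s_%s_MASK 0x%08X\n", $reg, $field, 1 << $bit;
    }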
It gets to be a hassle for configuration management and maintenance when different domains, or even subsystems within a domain, are translated with different tools.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:
> So I thought you were presenting the Application Domain, but in fact
> this was your solution-architecture (or OOA-of-Architecture model in
> OOA terms). I suggested that the domain containing the Register
> object was the Application Domain, but you say it's a Service
> Domain. So my question is: why did you not start with the
> Application Domain?

The definition of Application vs. Service vs. Architectural Domains depends on context. If your problem is architectural then you can view the architecture as the application! :-)

I have found that it can be very useful to focus on architectures without worrying about the whole system. Although, in the context of the system, I know what all the domains are, I chose to ignore them and define a sub-problem in terms of the architectural requirement. I can then do an OOA of this architectural sub-problem, which can now be classed as my application (if you want). However, it's still an OOA of architecture.

Dave.

_________________________________________________________
DO YOU YAHOO!?
Get your free @yahoo.com address at http://mail.yahoo.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:
> Could you define a few more terms?
> IP   => Internet Protocol ?
> ARM  => Acorn RISC Machines ?
> VHDL => Very High-level Design Language ?
> problem-description => ?

It just goes to show how different environments use different acronyms:

    IP    = Intellectual Property
    ARM   = Advanced RISC Machines
    VHDL  = VHSIC Hardware Description Language
    VHSIC = Very High Speed Integrated Circuit

Problem Description = a description of the problem :-) ... either formal or informal, depending on context.

Dave.

_________________________________________________________
DO YOU YAHOO!?
Get your free @yahoo.com address at http://mail.yahoo.com

Ross Russell writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello,

My name is Ross Russell. I am a senior firmware engineer at Fujitsu Computer Products of America's Intellistor R&D operation in Longmont, Colorado.

I am attempting to modify the BridgePoint architecture to produce C code functions instead of C++ for an embedded tape drive application that is real-time and memory constrained. The first problem is porting the application from Solaris to a Motorola ColdFire microprocessor. The second problem is generating C code from the model information.

If anyone has any advice or experiences doing this sort of thing I would appreciate any comments. Thanks in advance for your time.

Regards,
Ross

"Paul Higham" writes to shlaer-mellor-users:
--------------------------------------------------------------------

My response to Mike Finn's response to my response to Mike Finn's response to somebody else's submission, which is no longer on my stack, is embedded in the following:

Responding to Paul Higham...
Some other things I would like to see:

1) A semi-permanent technical presence on comp.object to defend and
   promote S-M.

Yes. Good idea.

2) Development of a simplified form of S-M OOA. A version with the
   concurrent aspects removed to reduce the uncertainty of event
   generation/execution and data consistency. This would allow newbies
   an easier way in and also help out the MIS market.

3) Distribution of a freeware S-M OOA/RD tool, to include code
   generation by Translation. There does not seem to be much money to
   be made from analysis and design tools anyway. :-)

Is that why they are so expensive? :-)

4) Perversely, we must convince the Three Amigos that Translation is a
   *non-viable* approach. :-) IMHO, it is only a matter of time before
   the UML/Elaborationists cotton on to the power of Translation, and
   then where will we all be?

Perverse is the operative word here. If the "UML/Elaborationists" decide that translation is the way to go, then they will rapidly discover that analysis work products have to be as rigorously produced as source code. It would be a SMALL step for Los Amigos, but a giant leap for the software engineering community, to then subscribe to Steve and Sally's vision. Thus, contrary to your statement, I believe convergence would be beneficial to S-M OOA/RD.

At the risk of repeating myself, I do not think it required that we maintain an adversarial relationship with the proponents of UML; this is both pointless and dangerous. It is not likely that S-M OOA/RD would "win" a full frontal assault, or indeed any other kind of "battle" with the elaborationists. What matters is that the vision is preserved, not that Los Amigos are put out of business.

To finish, here is a question I find interesting: is S-M OOA/RD the unique solution to Steve and Sally's vision, in the sense of being a canonical form for all methods satisfying the translation requirements (complete and rigorous analysis work products, explicit separation of problem domains and implementation domains, etc.), or are there many non-equivalent methods?

<> paul <>

Mike
--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I've put a new subject line on this thread to better match the discussion.
I think we all recognize the difference between "method" and "notation", and given that, most of us could become comfortable with applying the Shlaer-Mellor OOA/RD (SM-OOARD) method using either SM-OOA notation or a subset of UML notation. And for a while papers have been available from more than one source (PT and KC, anyway) discussing this very approach.

More importantly, the greater UML "community" - vendors and users - considers UML a *notation*, not a specific *method*. I think most of the current marketing of UML concepts and tools sweeps method issues into the background.

At 05:12 AM 8/7/98 EDT, "Paul Higham" wrote:
> ...
> Thus, contrary to your (Mike Finn's - ?) statement, I believe convergence
> would be beneficial to S-M OOA/RD. At the risk of repeating myself, I do
> not think it required that we maintain an adversarial relationship
> with the proponents of UML; this is both pointless and dangerous.

Paul, I agree completely with this observation. I am not aware of any direct or focused campaign by any of the "UML camp" against the Shlaer-Mellor method.

Rule 7 of "Peter's Laws" (from Peter Diamandis, not myself) is "If you can't beat them, join them, then beat them." I believe that most new adopters of any method will want to use UML notation for the same reason they choose to drive SUVs on paved roads. There are a few objective, technical merits - and these aren't even generally known - but the *real* answer is "fashion".

If the current polarization of translational vs. elaborational can be defused to a great extent - and removed as a barrier for new SM-OOARD adopters - by using the UML notation instead of the SM notation, then this is probably a *good thing*. The naysayers (including jittery management) can be placated by the trendiness of UML, and the practitioners can be free to apply the SM-OOARD method. Of course, all of this implies that some level of tool support is/will be available.

Anyone out there want to offer a dissenting opinion?

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi all - I was recently asked if Shlaer-Mellor OOA was used in the development of any "Safety-Critical" software. I was given the following definition:

> Safety Critical Software: denotes a software function, where a single
> deviation from the specified function may cause a hazardous situation.
> All software modules which control safety critical system functions are
> safety critical. Examples are:
> - Control software governing the speed of a medical infusion pump for
>   drug administration
> - Motor control of a moving or rotating device where no mechanical
>   safety interlocks are possible, as with cranes and platforms in opera
>   houses
> - Communication software transmitting the acceleration, speed and
>   position of the load of a crane.

If you know about a project that has used Shlaer-Mellor OOA to develop such software, I would greatly appreciate a reply to peterf@pathfindersol.com. Thank you.

 _______________________________________________________
| Pathfinder Solutions Inc.
| www.pathfindersol.com        888-OOA-PATH             |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

"Leslie Munday" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Try the Space Station. It's what I introduced into the software development process.

Leslie Munday.

"Tim Brockwell" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Hello,
>
> My name is Ross Russell. I am a senior firmware engineer at Fujitsu
> Computer Products of America's Intellistor R&D operation in Longmont,
> Colorado.
>
> I am attempting to modify the BridgePoint architecture to produce C code
> functions instead of C++ for an embedded tape drive application that
> is real-time and memory constrained. The first problem is porting the
> application from Solaris to a Motorola ColdFire microprocessor. The
> second problem is generating C code from the model information.
>
> If anyone has any advice or experiences doing this sort of thing I would
> appreciate any comments.

Hi Ross. How constrained, and how firmly set-in-concrete, is your target system and development environment?

I'm using another OOA tool and have modified its out-of-the-box, "vanilla" C++ Code Gen architecture to produce C++ code for an embedded X86 application running VxWorks and real-time X-Windows. Give me a yell if you wanna chat.

Best of Luck,
Tim

"Tim Brockwell" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> I was recently asked if Shlaer-Mellor OOA was used in the development of
> any "Safety-Critical" software. I was given the following definition:
>
> > Safety Critical Software: denotes a software function, where a single
> > deviation from the specified function may cause a hazardous situation.
> > All software modules which control safety critical system functions are
> > safety critical. Examples are:
> > - Control software governing the speed of a medical infusion pump for
> >   drug administration
> > - Motor control of a moving or rotating device where no mechanical
> >   safety interlocks are possible, as with cranes and platforms in opera
> >   houses
> > - Communication software transmitting the acceleration, speed and
> >   position of the load of a crane.
>
> If you know about a project that has used Shlaer-Mellor OOA to develop
> such software, I would greatly appreciate a reply to
> peterf@pathfindersol.com.

Hi Pete. How about the Lockheed Fort Worth F-16 midlife avionics upgrade? I don't think you can get much more safety-critical than jet fighter avionics. Last time I visited Andy Lay and his crew down there, they had about 80 developers doing SEI level 3 development with OOA. That was 2 or 3 years ago.

There's also the Lockheed Missiles and Space program called Payload Launch Vehicle here in Huntsville. (I worked on that one for a while.) They're using OOA to build a launch control system for an interceptor missile. Lots of automated ground testing prior to launch, that sort of stuff. Lots of safety issues there.

And then there's the Multiple Launch Rocket System (MLRS) upgrade, which I'm currently working on for the Army's Software Engineering Directorate. I'm using OOA/RD on this project as well. You can bet that the guys who sit in this thing while the rockets are firing are concerned about the reliability of the controlling software.
Of course, these are all military systems, and as such many OOAers may not wish to tout them. After all, these kinds of systems aren't as P.C. as opera house cranes. :-}

Take Care,
Tim

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

(Thread name change)

Responding to Paul Higham:

> To finish, here is a question I find interesting: is S-M OOA/RD
> the unique solution to Steve and Sally's vision, in the sense of
> being a canonical form for all methods satisfying the translation
> requirements (complete and rigorous analysis work products,
> explicit separation of problem domains and implementation domains,
> etc.) or are there many non-equivalent methods?

I'm not sure I understand the question. If SMOOA is "canonical", then by my definition it would be one of a family of equivalent solutions, and therefore not (in my mind) "the unique solution."

Independent from the above, I believe that any complete, unambiguous model should be convertible to another formalism which has the same concepts. I have reviewed many modeling methods and have found that they are conceptually very similar and differ mostly in emphasis and packaging. (In this there is a reasonable analogy with programming languages.)

You will find a good taxonomy of modeling methods in Jean Paul Calvez's book on real-time embedded systems (although the translation from the original French leaves a lot to be desired). Alan Davis' book, "Software Requirements: Objects, Functions, and States", has a lot to say about the benefits of mastering several methods of modeling.

-Chris
--------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, CA
LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

Ross Russell writes to shlaer-mellor-users:
--------------------------------------------------------------------

Tim Brockwell wrote:
> "Tim Brockwell" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> > Hello,
> >
> > My name is Ross Russell. I am a senior firmware engineer at Fujitsu
> > Computer Products of America's Intellistor R&D operation in Longmont,
> > Colorado.
> >
> > I am attempting to modify the BridgePoint architecture to produce C code
> > functions instead of C++ for an embedded tape drive application that
> > is real-time and memory constrained. The first problem is porting the
> > application from Solaris to a Motorola ColdFire microprocessor. The
> > second problem is generating C code from the model information.
> >
> > If anyone has any advice or experiences doing this sort of thing I would
> > appreciate any comments.
>
> Hi Ross. How constrained, and how firmly set-in-concrete, is your target
> system and development environment?
>
> I'm using another OOA tool and have modified its out-of-the-box, "vanilla"
> C++ Code Gen architecture to produce C++ code for an embedded X86
> application running VxWorks and real-time X-Windows. Give me a yell if
> you wanna chat.
>
> Best of Luck,
> Tim

Hello Tim,

Thank you for responding; I need all the help I can get. What OOA tool are you using? Did your modifications to the OOA tool produce C or C++ code?

Thanks for your time.

Regards,
Ross

Gordon Colburn writes to shlaer-mellor-users:
--------------------------------------------------------------------

Ross,

In 1993 I was part of a fairly large OOA/OOD project.
Our case tool was Cadre's Object Team, which at the time did not do any code generation, so a colleague and I designed an architecture, built translators which read model data output by the case tool (in CDIF format), populated a meta-model (an OOA-of-OOA), and generated code from it. Your situation is a bit different, as BridgePoint should give you a much stronger platform for code translation than what we had. However, like you, we generated C language code.

Our system was not real-time, but the volume of data and the complexity of the processing in our system required our architecture to be as efficient as possible. Our architecture employed several strategies to meet the performance requirements, the most significant being a flexible indexing scheme which allowed multiple compound indexes to be added to objects without any application code changes.

I've never used BridgePoint, so I can't help you with any tool-specific problems, but if you want to discuss issues relating to the generated C code (what sort of C to generate, etc.), feel free to contact me directly.

-Gordon
--
Gordon Colburn
gac@interzone.y.to

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Higham...

> To finish, here is a question I find interesting: is S-M OOA/RD
> the unique solution to Steve and Sally's vision, in the sense of
> being a canonical form for all methods satisfying the translation
> requirements (complete and rigorous analysis work products,
> explicit separation of problem domains and implementation domains,
> etc.) or are there many non-equivalent methods?

I agree with Lynch that any system with equivalent rigor and concepts could be substituted for SMOOA. I would carry this one step further and assert that translation can be accomplished with approaches other than S-M RD. I can certainly envision very different architectures for, say, a general ledger than for a garage door opener. If the architectures are different, then the translation rules will be different. If it doesn't quack like a duck...

However, this leads to an interesting cruise on the Existential Sea. If there could be several flavors of OOA and RD, each equivalently rigorous, could they be plug & play? This would require a bridge for each combination. (We're megathinking here, so let's ignore details like wondering why one would want a bridge to translate to a translator.) This suggests the need for a meta model of all the OOAs that a given RD wants to converse with. Would that meta model not be the One and Only Notation one would need? Is SMOOA already that meta model? I believe the meta model exists but SMOOA isn't it. Moreover, I don't think I'm going to leap on my mule and spend my declining years seeking it, because I suspect it will be so abstract that it will require a facility with metaphysics and epistemology to use it.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Fontana...

> I think we all recognize the difference between "method" and "notation", and
> given that, most of us could become comfortable with applying the
> Shlaer-Mellor OOA/RD (SM-OOARD) method using either SM-OOA notation or a
> subset of UML notation.
> And for a while papers have been available from
> more than one source (PT and KC, anyway) discussing this very approach.
>
> More importantly, the greater UML "community" - vendors and users -
> considers UML a *notation*, not a specific *method*. I think most of the
> current marketing of UML concepts and tools sweeps method issues into the
> background.

While I agree that no one is confused about the fact that UML is a notation, and that the current marketing has ignored method issues, I think that in practice there is a strong connection. There is a host of methodologies that make use of the notation, and those methodologies share a number of characteristics. Probably most notable is the dependence upon experience and judgment in applying the notation, rather than having built-in safeguards based upon mathematical rigor (e.g., relational identifiers). Also notable is the strong dependence of those methodologies on dynamic polymorphism. Since they all support the full UML syntax, they all effectively support elaboration. And many of those methodologies use some variation of functional decomposition for identifying classes, packages, categories, etc.

I believe it is a fair statement that UML is associated with those methodologies and, consequently, with elaboration. Demonstrating that S-M can be expressed in a subset of the UML notation just makes it more difficult to disassociate the S-M methodology from the UML family of methodologies.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lahman wrote:
> I agree with Lynch that any system with equivalent rigor and
> concepts could be substituted for SMOOA. I would carry this
> one step further and assert that translation can be
> accomplished with approaches other than S-M RD.

I, too, agree that OOA is not "the unique solution" to systems modelling. I do have some problems with the statement that S-M RD is not the only translation approach. There is currently so little rigor associated with it that SM-RD says nothing more than "take a rigorous model, map it onto a different rigorous model (repeat zero or more times), and write out code based on these models".

This is a generic description of translation. I would agree that there are many ways of doing this but, because SM-RD doesn't define any one particular method, I can't say that there are alternatives to SM-RD. (Of course, if you define SM-RD as "translation starting from an SM-OOA model" then it is trivially obvious that there are different, equivalent, methods.)

"Lynch, Chris D. SDX" wrote:
> Independent from the above, I believe that any complete,
> unambiguous model should be convertible to another
> formalism which has the same concepts

I believe you are being too restrictive. The model to which the first model is mapped does NOT need to have the same concepts. All that is necessary is that the meaning of a model under one formalism is the same as the meaning of another model under a second formalism. It is the model that provides the application-level concepts, and therefore these must be the same; whether or not the formalisms have any common concepts is not important.

During an RD process, models will usually (always?) gain information during a translation step.
This can cause problems if your purpose is to move from one formalism to another at the same level of abstraction, because you need to know what information to lose when you translate back. This is a much greater problem for proponents of elaboration methods who want "round-trip" processes.

Dave.
--
David Whipp, Siemens AG: HL DC PE MC, MchB
mailto:whipp@hl.Siemens.de   Phone. +49 89 636 83743
Opinions expressed are my own.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I do have some problems with the statement that S-M RD is
> not the only translation approach. There is currently
> so little rigor associated with it that SM-RD says nothing
> more than "take a rigorous model, map it onto a different
> rigorous model (repeat zero or more times), and write out
> code based on these models".
>
> This is a generic description of translation. I would agree
> that there are many ways of doing this but, because SM-RD
> doesn't define any one particular method, I can't say that
> there are alternatives to SM-RD. (Of course, if you define
> SM-RD as "translation starting from an SM-OOA model" then it
> is trivially obvious that there are different, equivalent,
> methods.)

My model for S-M RD was the course provided by PT, as of several years ago, plus a few tidbits like the wormhole paper. While I agree that the formal exposition of the rigor is shaky, I would still argue that this represents a defined approach to RD. It is at least sufficiently defined that it can be contrasted with ideas that you and others have proposed on SMUG that I think provide viable alternatives to that model, at least in part.

> I believe you [Lynch] are being too restrictive. The model to which
> the first model is mapped does NOT need to have the same
> concepts. All that is necessary is that the meaning of a
> model under one formalism is the same as the meaning of
> another model under a second formalism.

My reading was that Lynch was referring to the underlying semantics when he referred to the "concepts". However, in the interest of picking nits I would argue that the abstractions of the formalism represent concepts that must be shared in some fashion in order to demonstrate the identity of the same model in both formalisms. For example, if you are going to provide an alternative formalism to SMOOA, the concepts of cardinality and relationships have to be expressed in that formalism in some manner -- possibly through systematic composition of fundamental concepts in that formalism.

For instance, I believe an SMOOA could be expressed in Jaworski's jMap formalism, where "relationship" has a much broader meaning than in SMOOA. Nonetheless, the more restrictive concept of a relationship and its cardinality in an SMOOA can be identified as a combination of jMap artifacts. In the jMap world that composition of artifacts represents a standardization (a design pattern, if you will) for representing the SMOOA concepts. [Note that I am using "artifact" to mean a jMap formalism concept; it's easier to keep straight whose concepts are whose this way.]

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp responding to me: [Lynch] >> Independent from the above, I believe that any complete, >> unambiguous model should be convertible to another >> formalism which has the same concepts [Whipp] >I believe you are being too restrictive. The model to which >the first model is mapped does NOT need to have the same >concepts. all that is necessary is that the meaning of a >model under one formalism is that same as the meaning of >another model under a second formalism. >It is the model that provides the application-level >concepts; and therefore these that must be the same. >whether or not the formalisms have any common concepts >is not important. [me again] Maybe I should have said "easily convertible" or "easily ported" instead of merely "convertible". Examples of "concepts" (i.e. "features") which would not be easy to port/convert: 1) a state-modeling notation which indicates response-time constraints. 2) a process modeling notation which offers dynamic publish-and-subscribe type messaging, broadcasting, and operations to determine the existence of other objects in the system (analogous to PC-hardware plug-n-play.) In my mind these "concepts" of the formalism have a radical impact on the structure and meaning of models and make conversion very impractical. (I certainly don't want to get stuck doing the conversion ;-) ) All this is to belabor these truisms: 1) the more expressive and powerful the formalism, the more compact the model. 2) As with human languages, some things just don't translate well. -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Lahman wrote: > However, in the interest of picking nits I would argue that > the abstractions of the formalism represent concepts that > must be shared in some fashion in order to demonstrate the > identity of the same model in both formalisms. If your going to start picking such nits then perhaps its time to bring up the name Godel also the concept of the Universal Turing Machine. An SM-OOA model cannot represent a non-computable (or non-algorithmic) problem so it can be represented under any formalism which supports computation. If I wish to demonstrate the identity of models under different formalisms that I'll go and find a tame mathematician. However, this is all rather irrelevant. As Chris Lynch clarified, he intended to say: > "easily convertible" or "easily ported" instead of merely > "convertible" This, of course, leads to the problem of defining "easy". The standard way seems to involve working out whwther the conversion process would scale with polynomial or non-polynomial resource requirements. Intuitive feelings for whever a mapping will be easy or hard can be inaccurate. > All this is to belabor these truisms: > 1) the more expressive and powerful the > formalism, the more compact the model. > 2) As with human languages, some things just don't > translate well. And, more to the point, the closer the formalism is to the natural architecture of the problem (i.e. the structure of thought used by experts on that problem): the more powerful the formalism appears to be for that problem; and the more compact the model. 
With this in mind, I tend towards the belief that we should aim for a translation approach that combines multiple formalisms as the process flows from application to code architectures.

A simple example: I once tried to model a game of pool (or snooker, or billiards) using OOA. I had difficulty with the domain that described the (continuous) movement of balls on a table. I wanted to describe them using a few simple differential equations (and a few thresholds), but OOA doesn't seem to support that. So, instead, I defined my equations and defined an event interface to an OOA domain that handled events such as balls hitting each other, being hit by a cue, and passing speed thresholds. Thus the domain chart had a non-OOA domain that was not an implementation domain. If I wished to translate that system then I would either need non-OOA translation or I would need to manually convert the non-OOA domain into an implementation domain (or contort it into an OOA model).

Dave.
--
David Whipp, Siemens AG (HL DC PE MC), MchB
mailto:David.Whipp@hl.Siemens.de   Phone. +49 89 636 83743
Opinions expressed are my own.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > However, in the interest of picking nits I would argue that
> > the abstractions of the formalism represent concepts that
> > must be shared in some fashion in order to demonstrate the
> > identity of the same model in both formalisms.
>
> If you're going to start picking such nits then perhaps it's
> time to bring up the name Godel, and also the concept of the
> Universal Turing Machine. An SM-OOA model cannot represent
> a non-computable (or non-algorithmic) problem, so it can be
> represented under any formalism which supports computation.
>
> If I wish to demonstrate the identity of models under
> different formalisms then I'll go and find a tame
> mathematician.

It seems to me you are arguing my point. To demonstrate that two models developed with different formalisms (that both support computation) are identical, that tame mathematician would have to be able to recognize *some* set of concepts, such as sequence, branch, and iteration, in the representations of both models.

> However, this is all rather irrelevant.

So what else is new? It's Friday afternoon.

> And, more to the point: the closer the formalism is to the
> natural architecture of the problem (i.e. the structure of
> thought used by experts on that problem), the more powerful
> the formalism appears to be for that problem, and the
> more compact the model.
>
> With this in mind, I tend towards the belief that we should
> aim for a translation approach that combines multiple
> formalisms as the process flows from application to
> code architectures.

I agree about the value of multiple formalisms. But are you advocating the domino approach -- where they are lined up in sequence and the one on the OOA end is pushed? That is, each formalism incrementally moves from the best problem environment description to the best implementation environment description and, since the formalisms in each adjacent pair in the migration are quite similar, the conversion at each step is relatively trivial and easily formalized in the translator. Sounds rather like automated elaboration.

> Thus the domain chart had a non-OOA domain that was not
> an implementation domain. If I wished to translate that
> system then I would either need non-OOA translation or
> I would need to manually convert the non-OOA domain into
> an implementation domain (or contort it into an OOA
> model).

We tend to have a lot of non-OOA domains lately that are not implementation domains, but that's another story. The thing I would like to clarify is why you feel that the translation of an entire non-OOA domain would be any different from translating a transform within an OOA domain, other than scale. Whether it is a domain oval or a process circle, what's inside is not specified in any way within the OOA, so both need non-OOA translation.

-- H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...
> You could call it automated elaboration; but that implies that each step is just adding implementation-dependent information to the previous view. This may be true, but it misses the point that the translation step actually "re-writes" its source model in a formalism which provides direct support for the enhanced implementation detail (support through coloration, adornment, etc., is what I would call indirect support).

Yes, but the indirect support is indispensable -- there is just too much concrete detail that is not explicit in the OOA or that depends upon a particular implementation environment. That detail will necessarily have to be provided appropriately at different points in the sequence. I think you would have a difficult time convincing the elaborationists that there is a substantive difference between sprinkling the indirect support through the steps and elaborating more detailed models, if the process is truly incremental, regardless of the direct support that minimizes the need for indirect support.

> To call this a "domino" effect is misleading. A system is not a linear sequence of translations. It might be a tree, but it may be a more general graph structure (a DAG).

I agree. I don't think the pure sequential approach would be efficient in practice -- that's why I asked the question. I would prefer the different formalisms to describe separate aspects, artifacts, or processes and then glue them together via an overall mapping process in the translator.

> > The thing I would like to clarify is why you feel that the translation of an entire non-OOA domain would be any different than translating a transform within an OOA domain, other than scale.

> I am not sure what transforms have to do with this issue. I am generally of the opinion that there is no practical difference between a wormhole and a transform (other than that a wormhole is more generalised).

The transform is the most obvious example, but my point is that *any* process on the ADFD potentially contains non-OOA code. Even a simple data accessor may require operations like units conversions. That code is not defined in the OOA any more than a non-OOA domain's code. Therefore I don't see that the translator has to do anything different for a non-OOA domain than for any ADFD process in an OOA domain.

I would also argue that even if you regard a transform as a wormhole into a non-OOA domain, that domain still has to be translated. The translator would do nothing different when processing the transform in the OOA domain than it would do in processing a non-OOA domain at the end of an explicitly defined synchronous wormhole. In both cases someone has to provide an interface to the realized code and tell the translator to use that interface for the transform/domain. The translator processes the provided interface the same way in both cases.

> All that I am asking is that the definition of recursive design should not assume that all domains are modelled in OOA. Translating non-OOA domains brings out aspects of translation that are not so prominent for OOA translation.

I am not sure that the definition of RD is that restrictive. As you pointed out, there is so little existing formalism that I am not sure one can make that judgment. OTOH, since 60-80% of a typical OOA application's code is realized anyway when using ADFDs, I would think RD would be forced to deal with that possibility.

--
H. S. Lahman
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> Yes, but the indirect support is indispensable

Agreed.

> That detail will necessarily have to be provided appropriately at different points in the sequence.

But never by manual modification of derived information -- it should be specified at a leaf node and propagated in (though you may only be able to work out what is needed by examining derived information).

> I think you would have a difficult time convincing the elaborationists that there is a substantive difference between sprinkling the indirect support through the steps and elaborating more detailed models, if the process is truly incremental, regardless of the direct support that minimizes the need for indirect support.

Elaborationists and translationists both require some form of adornment. It is a necessary consequence of propagating models into code. The difference is how that information is used.

The result of elaboration of a model is a new model that has a structure consistent with the adornments (well, it isn't really re-structured: it's just more detailed; and the structure of the new detail is consistent with the adornments). The result of translation is the same (though genuine restructuring is more likely than with elaboration). The difference lies in the repeatability of the process. You can try several different ways of translating the model (with different adornments or interpretations) to find the optimal one. (Indeed, you might even build an expert system that does the adornments for you: this would then be just another domain in the system.)

> The transform is the most obvious example, but my point is that *any* process on the ADFD potentially contains non-OOA code. Even a simple data accessor may require operations like units conversions. That code is not defined in the OOA any more than a non-OOA domain's code.

You make two different statements: "[any process] potentially contains non-OOA code" and "That code is not defined in the OOA." I agree with the latter version. My point is that an ADFD process is fully defined within the OOA. It is defined by the use of a terminal symbol (the name of the process) which must be either bridged or translated to another domain. (If your CASE tool lets you enter a mapping directly using a process description language then it's just helping you to construct the mapping: it's not putting the process description within the OOA -- even if it thinks it is :-).) But, yes, if the process is mapped to a non-OOA domain (which isn't an implementation domain) then it does require non-OOA translation.

> I would prefer the different formalisms to describe separate aspects, artifacts, or processes and then glue them together via an overall mapping process in the translator.

I may be quibbling, but I don't like the term "glue them together". A couple of years ago I had a conversation about metaphors for system construction. One view was that it's like connecting a cable loom -- you just need to connect the cables to their correct places. The other view was that it's more like combining different coloured lights: when you combine the pieces you get a new colour.
This could also be expressed in terms of layers on a map (you have your sewers, your roads, your electricity grid, etc. -- they each have an internal structure (e.g. water flows downhill unless you pump it) but they must exist within the landscape of the other layers).

When I went away and thought about it, there were actually a few more ways of thinking about such metaphors. You can think about fractals: when you zoom in, you lose sight of the big picture, but the structures you see still reflect the outer structure. I could go on; but the term "glue them together" feels like the cabling metaphor -- fine as far as it goes, but it misses the richness of some of the other ways of looking at interconnection.

I also don't really like the concept of an "overall mapping process" -- it's a bit too centralised. However, there will usually be some representation of the system build process so that you can just type "make" to set the whole thing going. Of course, if you require zero-downtime (or live) upgrades then things could get very interesting.

Dave.
--
David Whipp, Siemens AG (HL DC PE MC), MchB
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions expressed are my own.

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greetings!

The following questions center around "The Project Matrix: A Model for Software Engineering Project Management, IEEE". Is this still a "valid" management tool? Does anyone use it? Would anyone care to share their styles/examples in generating requirements documents and software specifications? Most PM discussions revolve around philosophy and provide little practical knowledge.

As usual, any refs (books/mags/web sites) on the above are, indeed, appreciated!

Kind Regards,
Allen Theobald

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald:

> Would anyone care to share their styles/examples in generating requirements documents and software specifications?

We have used the IEEE SRS format successfully.

> Most PM discussions revolve around philosophy and provide little practical knowledge. As usual, any refs (books/mags/web sites) on the above are, indeed, appreciated!

You mentioned "practical"; "Managing a Programming Project" by Phillip Metzger is very practical, with checklists for every phase of the project.

Others have observed that the main determiner of programmer productivity is the skill and experience of the individuals. The project matrix cannot change that. One thing to watch out for in using the project matrix is the revisiting of domains which were checked "done". If you have new analysts, expect a fair amount of this. Also, architecture tends to be a poorly estimated effort, in my experience.

-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > That [indirect support] detail will necessarily have to be provided appropriately at different points in the sequence.

> But never by manual modification of derived information -- it should be specified at a leaf node and propagated in (though you may only be able to work out what is needed by examining derived information).

True.
> Elaborationists and translationists both require some form of adornment. It is a necessary consequence of propagating models into code. The difference is how that information is used.

> The result of elaboration of a model is a new model that has a structure consistent with the adornments (well, it isn't really re-structured: it's just more detailed; and the structure of the new detail is consistent with the adornments).

I'll buy that as a significant difference in the mechanics of the process. But I think the argument is that the similarity still exists in substance because one still incrementally goes from OOA to implementation descriptions -- the path is just different.

> The result of translation is the same (though genuine restructuring is more likely than with elaboration). The difference lies in the repeatability of the process. You can try several different ways of translating the model (with different adornments or interpretations) to find the optimal one.

True, the rigor of the approaches is quite different. But I think the argument above applies here as well.

> > The transform is the most obvious example, but my point is that *any* process on the ADFD potentially contains non-OOA code. Even a simple data accessor may require operations like units conversions. That code is not defined in the OOA any more than a non-OOA domain's code.

> You make two different statements: "[any process] potentially contains non-OOA code" and "That code is not defined in the OOA." I agree with the latter version.

I regard "OOA code" as the state action descriptions and, if they are sufficiently detailed, the DC's bridge descriptions. Aside from the events themselves, these are the only places where algorithmic processing is described. I regard all the other code that is necessary to make the application actually work, and which the developer must provide, as "non-OOA code". Therefore, in this view, almost all processes contain (i.e., are placeholders for) non-OOA code.

I am belaboring this point because it is something that has always bothered me about S-M. You can crib some of that non-OOA code from libraries, and some of it is easily generated generically (e.g., accessing data stores from a particular database). However, there is still a lot of algorithmic code that is specific to the application, buried in processes (especially transforms), that is not addressed by an OOA. Regarding a transform as a wormhole to a non-OOA support domain simply defines a conceptually handy place to relegate this undefined code -- it is still associated with the process.

Thus S-M OOA really represents a high-level description of the large-scale processing of the system. The rigor at that level leads to robust, maintainable, and reliable systems, and I think that it is quite scalable. The downside, though, is that as much as 60% of the system may be left as an exercise in coding. When we look at our defect rates for released products, those defects are very rarely described in the OOA; they are almost always buried in transforms or the non-OOA domains. Similarly, when we enhance the system the changes are often almost trivial if they can be done exclusively in the OOA, but they become more troublesome if they need to be handled in the non-OOA code. (Fortunately not *too* troublesome because of the OOA partitioning of non-OOA code into atomic units like processes.)
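To make the division of labour concrete, here is a hypothetical sketch (not our actual generated code; all names invented). All the OOA gives the translator for a transform is its name, so the most the translator can emit is a call; everything behind the call is hand-written:

    /* Generated (sketch): the bubble "Compute average" is only a name
       in the OOA, so the translator can emit no more than the call. */
    #include <cstddef>

    double transform_compute_average(const double *samples, std::size_t n);

    void action_update_statistics(const double *readings, std::size_t n,
                                  double *average)
    {
        *average = transform_compute_average(readings, n);  /* generated */
    }

    /* Hand-written realized code -- invisible to the OOA and to the
       model simulator: */
    double transform_compute_average(const double *samples, std::size_t n)
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += samples[i];
        return (n == 0) ? 0.0 : sum / (double)n;
    }

    int main()
    {
        const double readings[] = { 1.0, 2.0, 3.0 };
        double average;
        action_update_statistics(readings, 3, &average);  /* average == 2.0 */
        return 0;
    }

Any defect in the loop lives below the level at which the OOA has anything to say.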
The bottom line is that although the OOA partitions the non-OOA code nicely, it provides no guidance or rigor for developing that non-OOA code. That is, this non-RD translation is even less well defined than RD itself, yet it can represent a major fraction of the system.

A secondary problem is that processes like transforms produce data that may later be tested to determine flow of control at the OOA model level. If the transform code is not visible, the model simulator cannot properly simulate the behavior of the model -- to do so the developer must supply those values, and that is an error-prone test process.

> My point is that an ADFD process is fully defined within the OOA. It is defined by the use of a terminal symbol (the name of the process) which must be either bridged or translated to another domain. (If your CASE tool lets you enter a mapping directly using a process description language then it's just helping you to construct the mapping: it's not putting the process description within the OOA -- even if it thinks it is :-).)

I agree that the CASE tools slip in a lot of stuff that is not defined in the OOA. But I disagree that the ADFD process is fully defined, except in the strictest syntactic sense. It is simply an abstraction for processing that is not defined at all, regardless of where it is conceptually located. That processing is often application-specific, so the application cannot execute without it, and it cannot be created by a generic translator. Whether the developer writes the code manually or the architect tweaks a customized translation, that processing is defined and implemented outside the OOA/RD.

> I may be quibbling, but I don't like the term "glue them together". I could go on; but the term "glue them together" feels like the cabling metaphor -- fine as far as it goes, but it misses the richness of some of the other ways of looking at interconnection.

My metaphor is a molecule with lots of different flavors of atoms glued together with electrostatic bonds. This is a nice model if maintainability and plug & play are your primary concerns (i.e., interchanging one type of atom for another or changing the lattice structure). Clearly a nice fit for the domain chart, but perhaps not so nice a fit for the translation formalisms.

--
H. S. Lahman
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

> You mentioned "practical"; "Managing a Programming Project" by Phillip Metzger is very practical, with checklists for every phase of the project.

Which edition? The 2nd (1980s) or the 3rd (1995)? Sometimes the later editions aren't nearly as useful as earlier ones. :^)

-Allen

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

I no longer have the book and don't know the edition (but I used it in the 80's.)
-Chris

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to lahman & Whipp...

> ... The downside, though, is that as much as 60% of the system may be left as an exercise in coding. ...

Why?

We have a current project which, as of the last time we ran line counts on the source, is over 99% generated code. Our toolset translates directly from the A/SDFD models (no action language) and a database of associated data (descriptions, attribute types, etc.). Granted, we have made a few extensions to OOA, but none of them break the spirit of the method. For instance, we have replaced the Transform process with two processes: a Set process, which has an expression (we allow most basic math operators), and a Call process, which invokes a hand-coded function. The translator generates all code associated with the Set process, and inserts a function call for the Call process.

We have also formalized the text content of the process bubbles. Except for the processes which require an expression (Set, Read Where, Write Where...), the code for all processes can be generated from the context of the process on the A/SDFD. No arbitrary process descriptions are necessary. By including a simple expression capability in the formalism (math and comparison operators), the processes with expressions can also be directly translated.

The only hand coding we do is writing the functions behind the Call process and the realized domains (accessed via wormholes).

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com  www.transcrypt.com

"Michael M. Lee" writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 04:32 PM 8/18/98 +0000, you wrote:

> Allen Theobald writes to shlaer-mellor-users:
> Greetings! The following questions center around "The Project Matrix: A Model for Software Engineering Project Management, IEEE". Is this still a "valid" management tool?

Yes, I think it's very helpful in producing a work breakdown structure for the work in any area where you can define a common set of development steps (rows) for a set of system components (columns). In S-M, the process steps are the modeling, model coloring/allocation, and remaining translation steps that are not automated, and the system components are subsystems.

IMO, one of the key factors in making the project matrix (or any other task identification/tracking scheme) work well is to have a sufficiently fine resolution on the process steps that they are relatively small (measured in weeks) tasks and have well-defined completion criteria. This makes them easier to estimate, track, and control.
I do not think the OOA models (OIM, SM, PM), which are usually the rows shown on a Project Matrix, offer sufficient resolution. For example, I break the OIM into the following steps:

1. Write a technical note capturing/clarifying the requirements that are to be modeled. (See Leon Starr's book for good examples of this.) Review with domain specialists.

2. Build a preliminary OIM (no object, attribute, relationship descriptions). Hold a walkthrough with subsystem team members. Assess the degree to which the model captures the requirements documented in the technical note. If new requirements are uncovered, include revision of the technical note in the next step.

3. Revise per walkthrough and complete the OIM. Distribute for formal review.

4. Hold a formal review of the OIM. Again, review against the requirements captured in the technical note and note any newly discovered or changed requirements.

5. Revise per review.

> Does anyone use it?

Yes, I use it whenever I'm helping my clients identify and schedule the work on their S-M projects.

> Would anyone care to share their styles/examples in generating requirements documents and software specifications?

This is usually client-driven for me.

> Most PM discussions revolve around philosophy and provide little practical knowledge. As usual, any refs (books/mags/web sites) on the above are, indeed, appreciated!

I'll second Chris's Phillip Metzger reference.

Cheers - Michael
--------------------------------
M O D E L   I N T E G R A T I O N
Model Based Software Development
500 Botany Court
Foster City, CA 94404
mike@modelint.com
650-341-2544(v)  650-571-8483(f)
---------------------------------

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> Responding to Whipp...

> I'll buy that as a significant difference in the mechanics of the process. But I think the argument is that the similarity still exists in substance because one still incrementally goes from OOA to implementation descriptions -- the path is just different.

But then, the only substantive difference between elaboration and translation is the path. Notations, and even modelling formalisms, are not important.

> ... a lot of algorithmic code that is specific to the application that is buried in processes, especially transforms, that is not addressed by an OOA. Regarding a transform as a wormhole to a non-OOA support domain simply defines a conceptually handy place to relegate this undefined code -- it is still associated with the process. [...] as much as 60% of the system may be left as an exercise in coding. When we look at our defect rates for released products those defects are very rarely described in the OOA; they are almost always buried in transforms or the non-OOA domains. [...] The bottom line is that although the OOA partitions the non-OOA code nicely, it provides no guidance or rigor for developing that non-OOA code.

I agree that some detail is not provided in the OOA. To complete the model, it is necessary to specify what the processes mean. However, there is a lot that can be done even without rigorous low-level specification. If you ignore wormholes, then all processes either have no side effects or have well-defined side effects. This means that it is possible to construct test plans from the model, and to get test-coverage statistics. Wormholes can be handled during component testing because they are the stimulus-response ports of the model.
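As a sketch of the mechanics (the state model here is invented), a tool can walk the state transition table and emit the coverage obligations for a test plan directly:

    /* Sketch: enumerate the stimulus/response pairs a test plan must
       cover, from an invented state transition table. */
    #include <cstddef>
    #include <cstdio>

    struct Transition { const char *state, *event, *next; };

    static const Transition stt[] = {
        { "Idle",    "E1: Start", "Running" },
        { "Running", "E2: Stop",  "Idle"    },
        { "Running", "E3: Fault", "Failed"  },
    };

    int main()
    {
        const std::size_t n = sizeof stt / sizeof stt[0];
        for (std::size_t i = 0; i < n; ++i)
            std::printf("must cover: %-8s --%-9s--> %s\n",
                        stt[i].state, stt[i].event, stt[i].next);
        return 0;
    }

Nothing here requires knowing what the actions compute -- only that their side effects are the ones the model declares.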
One area that cannot be covered is domain testing (testing boundary conditions). This is not possible because an OOA model specifies neither the transforms, the tests, nor the filters (for accessors). You are right to point out that this does need to be sorted out. (I have never found it to be a problem because most of my transforms are no more complex than standard arithmetic and logic functions ... but, strictly speaking, it is wrong even to assume the definition of "+".)

My preference is to use some form of declarative specification for processes. Many people do not like declarative specifications. The objections are exemplified in Steve's latest paper (Precise Action Specifications for UML), where he says: "there is often a need to include some level of algorithmic specification to ensure efficient execution." I do not believe that this is a valid concern for an OOA model, because much of the supposed inefficiency is already embedded in the formalism of the ADFD (non-functional requirements are supported through coloration). The lack of side effects in the processes allows declarative specification through pre- and post-conditions to be a realistic approach.
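For example (everything here is invented for illustration), a transform can be specified declaratively and any hand-written implementation checked against that specification:

    /* The specification is the pair of conditions in the comment; the
       loop is just one hand-written implementation tested against it.

       spec:  pre:  (none -- n is unsigned)
              post: r*r <= n  and  (r+1)*(r+1) > n                  */
    #include <cassert>

    static unsigned integer_sqrt(unsigned n)      /* hand-written code */
    {
        unsigned r = 0;
        while ((r + 1) * (r + 1) <= n)
            ++r;
        return r;
    }

    static void check(unsigned n)                 /* post-condition test */
    {
        unsigned r = integer_sqrt(n);
        assert(r * r <= n);
        assert((r + 1) * (r + 1) > n);            /* nb: overflows near the
                                                     top of the range -- the
                                                     finite-set caveat below */
    }

    int main()
    {
        for (unsigned n = 0; n < 10000; ++n)
            check(n);
        return 0;
    }

No algorithm appears in the specification itself, yet it is complete enough both to test against and, in principle, to translate.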
Once a formal specification of the required behaviour of a test, transform, or filter (in an accessor) is defined, it becomes possible to complete the functional test-case analysis and to do translation. Translators already exist for some formal specification languages, so it is not unreasonable to propose such translation. Alternatively, the formal pre- and post-conditions enable rigorous testing of hand-written code (and, if desired, of a library component). There is a potential issue if you use infinite sets for attribute domains, but this is a standard problem (e.g. "forall x in {Natural}: succ(x) == x+1" is not true for finite implementations).

Dave.
--
David Whipp, Siemens AG (HL DC PE MC), MchB
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own; factual statements may be incorrect.

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Michael M. Lee replied to Allen Theobald

> > The following questions center around "The Project Matrix: A Model for Software Engineering Project Management, IEEE". Is this still a "valid" management tool?

> Yes, I think it's very helpful in producing a work breakdown structure for the work in any area where you can define a common set of development steps (rows) for a set of system components (columns). In S-M, the process steps are the modeling, model coloring/allocation, and remaining translation steps that are not automated, and the system components are subsystems.

Lots of agreement here.

> IMO, one of the key factors in making the project matrix (or any other task identification/tracking scheme) work well is to have a sufficiently fine resolution on the process steps that they are relatively small (measured in weeks) tasks and have well-defined completion criteria. This makes them easier to estimate, track, and control.

Lots more agreement here. In my most successful uses of a project matrix, we aimed for making each project matrix cell (process step X for subsystem Y) be as close to two weeks of calendar time as possible. We did our best to never have any less than one calendar week nor more than three calendar weeks. This was partly done by adjusting the number of people we assigned to each cell; for example, if the original estimate was more than 3 person-weeks then we assigned 2 or more people to that cell. IMHO, you'd never want to let a cell be more than 3 calendar weeks of time or it stands a chance of getting out of control. OTOH, when cells are of very short duration (1 week or less) it's too easy for people to get the feeling that they are being micro-managed. The "aim for 2 calendar weeks" seems to strike a nice balance between keeping the project under control and not giving the team members the impression they are being micromanaged.

I also want to echo "... and have well-defined completion criteria". But I'd also add that a) those completion criteria are the basis for an inspection/review/walkthrough (i.e., the criteria are the inspection checklist), and b) the corresponding project matrix cell is not marked as complete until the inspection/review/walkthrough ends in acceptance of the work-product (be sure to include inspection preparation, actual inspection, and subsequent re-work time in the estimate for each cell).

BTW: I've always wanted to try the following experiment but was never quite able to fit it into a real project. The theory is that if I am responsible for completing step N for some component in a project matrix where I was _not_ the person who did the deliverable for that component's step N-1, then I will be particularly picky in the review/inspection/walkthrough for the step N-1 deliverable, to be sure that it is adequate for me to do my job. I'd want to be sure that as much of the important information as possible was in the document, not just in the head of the author. So instituting these two rules:

The person who does step X for component Y _cannot_ do step X+1 for component Y, and

The person(s) responsible for step X+1 for component Y _must_ be a reviewer in the review/inspection/walkthrough for the step X of component Y document

seems to go a long way towards making sure that the necessary information gets put into the documentation and that the reviews/inspections/walkthroughs are particularly effective.

I have noticed such a big difference in the effectiveness of the reviews/inspections/walkthroughs where this situation happened by accident that it seems worthwhile trying it more globally.

> I do not think the OOA models (OIM, SM, PM), which are usually the rows shown on a Project Matrix, offer sufficient resolution.

I guess I disagree with this. I think the key difference is that I'd say "adjust the number of people assigned to the cell to bring the flow time as close as possible to two weeks" rather than move to finer-grained steps. I'll explain this in the context of Mike's suggestions.

> For example, I break the OIM into the following steps:
> 1. Write a technical note capturing/clarifying the requirements that are to be modeled. (See Leon Starr's book for good examples of this.) Review with domain specialists.

My concern with technical notes used in this form is twofold. First, everything in the technical note ends up being redundant with something in one or more of the later models. Thus, when the OIM, SM, and PM are done, the technical note is entirely replaced by the actual models. I'm concerned with doing the work more than once (especially when I can then throw one of those away), as well as, if I decide to keep the technical notes around, the problem of having to maintain identical information in more than one place.
Second, given the built-in ambiguities etc. of natural language, it's usually hard to tell when time spent on technical notes is really adding value (i.e., increasing our knowledge) vs. just re-packaging the knowledge we already have. It's simply too easy for the non-value-added time to take a huge bite out of the project without anyone really noticing until it's too late. I prefer to reserve technical notes for those critical items that we just can't seem to find a way to express in the existing models.

> 2. Build a preliminary OIM (no object, attribute, relationship descriptions). Hold a walkthrough with subsystem team members. Assess the degree to which the model captures the requirements documented in the technical note. If new requirements are uncovered, include revision of the technical note in the next step.

My concern with this step is that it appears to violate the "... and have well-defined completion criteria" guideline. With no descriptions, it's really difficult to see that all the normalization has been done properly and that what I interpret X to mean is the same thing that everyone else interprets X to mean. Maybe I'm being a bit too pessimistic here, but I see too much room for waffling in this. I can see holding an _informal_ meeting as a sort of mid-course-correction kind of thing, but I'd personally be wary of basing project management status on such an informal thing.

> 3. Revise per walkthrough and complete the OIM. Distribute for formal review.
> 4. Hold a formal review of the OIM. Again, review against the requirements captured in the technical note and note any newly discovered or changed requirements.
> 5. Revise per review.

> > Does anyone use it?

> Yes, I use it whenever I'm helping my clients identify and schedule the work on their S-M projects.

Ditto on the yes, but be aware that the project matrix is a useful tool whenever the work can be broken down in two dimensions (a series of consistent process steps applied to a set of product components). In other words, it doesn't have to be an S-M project. It doesn't even have to be a software project.

> > Would anyone care to share their styles/examples in generating requirements documents and software specifications?

> This is usually client-driven for me.

I'll reserve comment on this one, as I think there are actually some interesting philosophical discussions about a) "just what in the heck is a 'requirement' anyway?", b) to what extent should 'requirements' be written in natural-language documents vs. 'formal' specifications, and things like that. But more often than not, these are really driven by the organizational policies of the place that's paying for the work (e.g., "we want to see documents X, Y, and Z, and this is what each of them is supposed to say"). Off on a tangential, but still marginally related, topic: the packaging of technical content into deliverable documents seems to be more driven by the vagaries of the configuration management/change management policies and procedures than anything else. But that's another topic for another time...

> > Most PM discussions revolve around philosophy and provide little practical knowledge. As usual, any refs (books/mags/web sites) on the above are, indeed, appreciated!

My first exposure to project management was in a book written by Meilir Page-Jones. I think it was called "Practical Project Management: Restoring Quality to Projects and Systems" or something like that. I thought it was very enlightening for a techie like me.
OTOH, it's fairly basic project management stuff, and there's plenty of more advanced material that's likely more appropriate for an in-the-trenches project manager (e.g., all of Barry Boehm's risk-based stuff).

Regards,
-- steve

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson...

> > ... The downside, though, is that as much as 60% of the system may be left as an exercise in coding. ...

> Why?

> We have a current project which, as of the last time we ran line counts on the source, is over 99% generated code. Our toolset translates directly from the A/SDFD models (no action language) and a database of associated data (descriptions, attribute types, etc.). Granted, we have made a few extensions to OOA, but none of them break the spirit of the method. For instance, we have replaced the Transform process with two processes: a Set process, which has an expression (we allow most basic math operators), and a Call process, which invokes a hand-coded function. The translator generates all code associated with the Set process, and inserts a function call for the Call process.

> We have also formalized the text content of the process bubbles. Except for the processes which require an expression (Set, Read Where, Write Where...), the code for all processes can be generated from the context of the process on the A/SDFD. No arbitrary process descriptions are necessary. By including a simple expression capability in the formalism (math and comparison operators), the processes with expressions can also be directly translated.

> The only hand coding we do is writing the functions behind the Call process and the realized domains (accessed via wormholes).

Actually, 60% is a conservative guess for our applications. Part of this is simply unique to our applications, like the need for translating languages (described below).

First, you are doing hand coding with the extensions you have added to your processes that allow you to specify rigorous, translatable expressions for your process descriptions. In effect, when you write down the expression you are writing code, because all the translator does is reformat it into another language. When we were doing ADFDs, we used a simple phrase to describe the process, like "Compute average". The methodology supports only three pieces of information for an ADFD process: identifier, description, and process type. (Two in practice, because the type can be inferred from context.) There is no requirement that the description itself be translatable.

I believe that your "simple expression capability" is crucial. It allows you to simplify your ADFDs by a great deal. If your Read, Write, and Test processes really are so simple that the code can be generated from context without colorization, then I will bet a significant fraction of the "generated" code's executable statements come from these expressions.

Second, the OOA does not define the bridge code. Except for the lowest-level hardware register R/Ws (which we generate automatically from write accessors), our bridges tend to be complicated despite our best efforts to minimize the intelligence of bridges. For example, we have a device driver with nearly 400 API bridge functions with -- at a guess -- an average approaching a dozen executable statements in each.
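For flavour, here is a hypothetical bridge function of roughly that size -- not one of ours; the names, units, and wormhole interface are all invented:

    /* Hypothetical bridge: format adaptation and a units conversion
       wrapped around a wormhole into the service domain. */
    #include <cstdio>

    /* wormhole into the service domain (assumed interface) */
    static void gen_set_frequency(int synth_id, long khz)
    {
        std::printf("gen SYN1: SetFrequency(%d, %ld)\n", synth_id, khz);
    }

    /* API bridge: the client works in MHz, the domain in kHz */
    int api_set_frequency_mhz(int synth_id, double mhz)
    {
        if (mhz < 0.0 || mhz > 3000.0)
            return -1;                       /* range check           */
        long khz = (long)(mhz * 1000.0);     /* units conversion      */
        gen_set_frequency(synth_id, khz);    /* into the domain       */
        return 0;                            /* API status convention */
    }

    int main()
    {
        return api_set_frequency_mhz(1, 145.5);
    }

None of the statements in the bridge body are visible anywhere in the OOA.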
Third, we are forced to do a lot of language/format translation. We have to deal with dozens of languages and formats for our input data. S-M does not handle this well because there are very few active objects -- you wind up with a mongo switch statement and a lot of little hand-coded switch action routines. In extreme cases, for example translating the ATLAS digital specification, we need a whole domain to do this because we need temporary storage due to the bizarre way that ATLAS is specified. In this case the domain has 30-odd objects, of which three are active and two of those are subtypes. This means that there is a lot of mundane processing of data in transforms. [Apropos of nothing, my personal opinion is that the digital part of ATLAS was designed by Old Guard Contractors with the primary purpose of preventing COTS test vendors from entering the DoD marketplace, but I digress...] While we interpret/translate a lot more input than most people probably do, I think this is still a significant problem. You may be able to bury it in the bridges, but things like units conversions and other reformatting have to be done somewhere.

A similar argument applies in general to device drivers. Basically a device driver simply moves a pile of bits in Format A from Over Here to Over There in Format B. There tend to be few objects with life cycles, but there are a lot of passive objects for temporary storage and there is a lot of format conversion. Thus a device driver is basically Yet Another Translator/Interpreter. Our main device driver happens to be quite large, with thousands of registers to play with, but the game itself is not very exciting.

Fourth, we do a lot of algorithmic processing outside of the device driver, for example in diagnostic routines. A transform process titled "Get next probe point" might have thousands of lines of code behind it. This involves pedestrian graph algorithms for circuit walking with some heuristics thrown in for performance reasons (the problems tend to be NP-complete). This is not something you do in state models. In our present diagnostic package only one of three domains uses state models, but the bulk of the code is in the two realized domains.

Fifth, even when we were doing ADFDs, we deliberately aggregated processes simply because the ADFDs were getting too complicated. The main reason was that it was a pain to reroute the data paths when you had to insert a process in the middle of the rat's nest. Life is too short to waste significant time on making diagrams readable. [At the time we were doing ADFDs we did not have a code generator, so this aggregation had no cost associated with it since we did manual code generation anyway.] If I were doing generation from ADFDs as you do, I think I would still be willing to trade off the time messing with the diagrams for some added manual coding in the called functions.

Sixth, because different domains often represent different levels of abstraction, the degree to which computations are specified can vary greatly. We have a number of domains where complex data structures are represented as a single attribute that is really a handle to a realized data structure (e.g., a bitmap). At the domain's level of abstraction it is not concerned with the structure of the data or that it is an aggregate. However, the domain may incidentally need to know something about it, such as whether it contains some particular value. In this case the ADFD would have something like:

    A, B ----------> Test: is A referenced in B? ------>> data flows for Yes and No
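In the most trivial case, the realized code behind that bubble might be no more than the following (a sketch with invented names, assuming B is a handle to a realized bitmap with one bit per possible value of A):

    /* Sketch of the realized code behind "is A referenced in B?". */
    #include <cstddef>

    bool test_is_referenced(unsigned a, const unsigned char *bitmap,
                            std::size_t nbits)
    {
        if (a >= nbits)
            return false;                               /* "No" data flow */
        return ((bitmap[a / 8] >> (a % 8)) & 1u) != 0;  /* "Yes" or "No"  */
    }

    int main()
    {
        const unsigned char b[2] = { 0x05, 0x00 };      /* bits 0, 2 set  */
        return test_is_referenced(2, b, 16) ? 0 : 1;
    }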
That test process is going to be realized code somewhere and, depending upon the complexity of B, it could be a substantial amount of code. Note that it _has_ to be realized code because otherwise B would be a reference to an object instance from another domain, which is a no-no. [Note that this points out a corollary problem in that test processes are often not simple comparisons.]

Seventh, I suspect that you may be doing a lot of colorization for the translation that is, in fact, hand coding. For example, our hardware models will have boolean attributes that really map to different bits in the same register. We use a mechanism where the write accessor for the attribute actually invokes a wormhole to do a R/M/W to the hardware register. This is clearly application-specific, so the code generator needs to know which bit in which register is associated with each attribute. [It also needs to know which attributes are bits, etc.] Somebody has to build a big, honking table to describe this for the particular application. Building that table is effectively hand coding -- it's just less verbose. And it will require debugging just like code.

Last, I don't know how your translator is able to generate code from all processes based upon context in general. The only process where this is always possible is the event generator. All create, delete, read, write, transform, and test processes potentially require manual code because the OOA does not define the low-level details. For example, I have never seen a significant non-MIS application that did not require a units conversion somewhere. All the OOA supports is a range of values -- it is left as an exercise for the developer to associate units with those values and ensure that those units are consistently processed.

Similarly, create and delete processes can be a problem when there are conditional relationships combined with a simultaneous view of time. Admittedly this is usually a matter of picking the right spot to invoke the process, but there are situations where you need to tinker with the innards. Also it is not uncommon for a create process to compute attributes based upon input data flow values. [I know, OOA96 constrains all data flow values to be attributes, but (a) they don't have to be attributes of the instance being created and (b) I agree with other reviewers that the constraint is unworkable.] Finally, read and write processes may require tweaking for units conversions and the like. [Note that prior to OOA96 one could pass two values to a write accessor and have it write a single attribute calculated from those values.]

In the example I gave above where the write accessor invokes a wormhole, that invocation can be easily handled by a template of some sort in theory. [I am so glad the New Oxford Dictionary has legitimized split infinitives! But I digress again...] However, life is rarely so simple in practice. In our case there are so many exceptions (e.g., not all register fields are single bits, or some 24-bit values are split between 16-bit registers) that we had to build an entire domain with thousands of lines of code to do this. [In fairness, I think Dave Whipp would argue that this domain is more architecture than application. By design we can use it with a different device driver and/or different hardware by replacing the table descriptions.]

--
H. S. Lahman
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"
"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to lahman...

> ...we used a simple phrase to describe the process like "Compute average". The methodology supports only three pieces of information for an ADFD process: identifier, description, and process type....

Unless operating on discrete attributes, Compute average would have to be implemented using a Call process. Process identifiers in a fully translated ADFD are meaningless and can be removed. Description is only necessary for documentation of complex processes; since we chose to analyze the content of these processes rather than leave them to be hand coded at a later date, descriptions are seldom needed. Type resolves to the kind of process. For instance, a read process has an identifier inflow and attribute outflows. Therefore, all it needs in the process bubble is 'Read'.

> I believe that your "simple expression capability" is crucial. This allows you to simplify your ADFDs by a great deal. If your Read, Write, and Test processes really are so simple that the code can be generated from context without colorization, then I will bet a significant fraction of the "generated" code's executable statements come from these expressions.

True. Add Delete, Create and Generate, and Set, and you have the majority of all processes on our ADFDs.

> Second, the OOA does not define the bridge code.

Sorry, we added a tad bit of definition here too. Our 'bridges' are analyzed using Wormholes and corresponding SDFDs in a 'proxy' object in the server domain.

> You may be able to bury it in the bridges, but things like units conversions and other reformatting have to be done somewhere.

I think unit conversions should be handled in one of two ways. Either analyze them, or use attribute types which get resolved into base type classes which know how to convert themselves.

> Fifth, even when we were doing ADFDs, we deliberately aggregated processes simply because the ADFDs were getting too complicated.

I think it was Leon Starr who made the statement that complicated ADFDs point to either incorrect object modeling or incomplete state modeling.

> A, B ----------> Test: is A referenced in B? ------>> data flows for Yes and No.

Yes, this would be realized in a Call process.

> That test process is going to be realized code somewhere and, depending upon the complexity of B, it could be a substantial amount of code.

Complex actions can be represented as a comparison when attributes are classes. For instance, an A == B comparison where A's base type was a string and B's was an integer would cause B to be converted to a string and the resultant strings to be compared to see if all characters are the same and in the same position, and the strings are the same length. The 'realized code' is in the architectural domain.
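As a sketch of that idea (invented names; one way an architectural base-type class hierarchy might carry the conversion):

    /* Sketch: attributes as architectural classes that know how to
       convert themselves, so A == B can compare across base types. */
    #include <sstream>
    #include <string>

    struct Attribute {
        virtual ~Attribute() {}
        virtual std::string as_string() const = 0;
        bool equals(const Attribute &other) const
        {
            return as_string() == other.as_string();  /* convert, compare */
        }
    };

    struct StringAttribute : Attribute {
        std::string v;
        StringAttribute(const std::string &s) : v(s) {}
        std::string as_string() const { return v; }
    };

    struct IntAttribute : Attribute {
        int v;
        IntAttribute(int i) : v(i) {}
        std::string as_string() const
        {
            std::ostringstream os;
            os << v;                                  /* integer -> string */
            return os.str();
        }
    };

    int main()
    {
        StringAttribute a("42");
        IntAttribute b(42);
        return a.equals(b) ? 0 : 1;                   /* equal: returns 0 */
    }

The analysis sees only A == B; the conversion machinery lives entirely in the architectural domain.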
> Seventh, I suspect that you may be doing a lot of colorization for the translation...

At this time, only objects have color, and they only specify whether the object is dynamic RAM based or the type of database table.

> Last, I don't know how your translator is able to generate code from all processes based upon context in general. The only process where this is always possible is the event generator. All create, delete, read, write, transform, and test processes potentially require manual code because the OOA does not define the low-level details.

Create, Delete, Read and Write are architecture issues. The analysis doesn't care whether the attributes are in memory, in a database table, or retrieved from a remote server via TCP/IP. The identifiers and attribute inflows and outflows define the actions; the architecture implements them.

> For example, I have never seen a significant non-MIS application that did not require a units conversion somewhere. All the OOA supports is a range of values -- it is left as an exercise for the developer to associate units with those values and ensure that those units are consistently processed.

Unit conversions are analyzed, not hidden. Our application is a trunked radio processing system (implemented as an NT service) which both controls the radio infrastructure hardware and communicates with other processors and stand-alone user interface applications via TCP and UDP messaging. It is also capable of generating e-mail notifications using SMTP when error conditions occur. Non-MIS and, I think, significant. (57MB of generated C++ source.)

> ...prior to OOA96 one could pass two values to a write accessor and have it write a single attribute calculated from those values...

Again, this should be analyzed to show what is happening: first hit a Set process to do the calculation, then flow the result to the Write.

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com  www.transcrypt.com

"Paul Higham" writes to shlaer-mellor-users:
--------------------------------------------------------------------

The sense in which I meant "canonical" is the same sense in which Chomsky or Greibach normal forms are canonical forms for the grammar of a context-free language, or in which Jordan canonical form is for a linear transformation. As such, a canonical form is something to which we can convert different representations to see if they represent the same thing. A system specification may be presented in different ways, but can "translatable" or "executable" specifications always be reduced/transformed into an S-M OOA representation?

Chris, you are right that my original wording of the question was not correct; I should have said "unique up to isomorphism" rather than "unique". But the original question was not intended to have complete mathematical precision, just to stimulate some thought.

<> paul <>

In message "(SMU) Method comparison (Thread name-change)", shlaer-mellor-users@projtech.com writes:

> "Lynch, Chris D. SDX" writes to shlaer-mellor-users:
> (Thread name-change)
> Responding to Paul Higham:

> > To finish, here is a question I find interesting: is S-M OOA/RD the unique solution to Steve and Sally's vision, in the sense of being a canonical form for all methods satisfying the translation requirements (complete and rigorous analysis work products, explicit separation of problem domains and implementation domains, etc.) or are there many non-equivalent methods?

> I'm not sure I understand the question. If SMOOA is "canonical", then by my definition it would be one of a family of equivalent solutions, and therefore not (in my mind) "the unique solution."

> Independent from the above, I believe that any complete, unambiguous model should be convertible to another formalism which has the same concepts.
> I have reviewed many modeling methods and have found that they are conceptually very similar and differ mostly in emphasis and packaging. (In this there is a reasonable analogy with programming languages.)

> You will find a good taxonomy of modeling methods in Jean Paul Calvez's book on Real-Time Embedded Systems (although the translation from the original French leaves a lot to be desired.) Alan Davis' book, "Software Requirements: Objects, Functions, and States", has a lot to say about the benefits of mastering several methods of modeling.

> -Chris
> --------------------------------------------
> Chris Lynch
> Abbott Ambulatory Infusion Systems
> San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
> -------------------------------------------

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> Seventh, I suspect that you may be doing a lot of colorization for the translation that is, in fact, hand coding. For example, our hardware models will have boolean attributes that really map to different bits in the same register. We use a mechanism where the write accessor for the attribute actually invokes a wormhole to do a R/M/W to the hardware register. This is clearly application-specific, so the code generator needs to know which bit in which register is associated with each attribute. [It also needs to know which attributes are bits, etc.] Somebody has to build a big, honking table to describe this for the particular application. Building that table is effectively hand coding -- it's just less verbose. And it will require debugging just like code.

> [...] However, life is rarely so simple in practice. In our case there are so many exceptions (e.g., not all register fields are single bits, or some 24-bit values are split between 16-bit registers) that we had to build an entire domain with thousands of lines of code to do this. [In fairness, I think Dave Whipp would argue that this domain is more architecture than application. By design we can use it with a different device driver and/or different hardware by replacing the table descriptions.]

I would argue that this is an ideal situation for an application-specific translator. (I'm sure you've mentioned automatically generated bridges in previous posts -- this is just an extension of those.) It should be reasonably simple to construct a model (probably just the OIM) of the concepts for packing bitfields into registers. This provides the basis for the translation (yes, bitfield-register mappings can get quite complex, but it's not beyond the capability of information modelling).

You should already have most of the information about the registers themselves in an electronic format. I have generally had it available as a FrameMaker document (which I then save as HTML). It's not too difficult to extract the information from tables if you can convert your information to HTML (or any other text format). In the worst case, you may have to type in the information yourself; but you would need such documentation anyway, so that's not an additional overhead (and anyway, construction of population data is a recognised part of SM).

An application-specific translator should contain a definition of both the structure of the document and the information structure that you want to populate, and should then populate the latter from the former. You may want to manually translate the information model into the data structures of your translator. Using translators to build translators can get out of control very quickly.
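As a sketch of what that information model might boil down to inside the translator (all names and the population entry are invented):

    /* Sketch: the bitfield-packing information model reduced to
       translator data structures. */
    #include <string>
    #include <vector>

    struct Field {
        std::string attribute;  /* the OOA attribute this field realizes */
        unsigned    lsb;        /* position within the register          */
        unsigned    width;      /* 1 for booleans; >1 for split values   */
    };

    struct Register {
        unsigned long      address;
        std::vector<Field> fields;  /* a register packs many fields      */
    };

    int main()
    {
        /* one population entry of the kind extracted from the tables: */
        Register r;
        r.address = 0x10A4;                          /* invented address */
        Field f;
        f.attribute = "Probe.mux_enable"; f.lsb = 3; f.width = 1;
        r.fields.push_back(f);
        f.attribute = "Probe.gain";       f.lsb = 4; f.width = 2;
        r.fields.push_back(f);
        return 0;
    }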
Once you've populated your model, you can then do the second stage of the translation: define the structure of the code to be generated. I would expect this to differ from the normalised representation of the OIM. You then need to map the population of your OIM onto this architectural model. Finally, it should be simple to dump out the architectural population using code templates.

If you look at the manual code that you need to write for this process, then it can be partitioned into:

- A model of the registers
- Population data
- A translator to map the population data into the model
- An architecture
- A translator to map the register model onto the architectural model
- Code templates
- Code to populate the templates (should be very reusable)

I would argue that none of these really constitutes manual code generation in the classical sense. They are all work-products beyond the basic application domain of the OOA; but such additional work is inevitable as soon as you go beyond the assumption that all translation is done using a single, monolithic compiler-translator.

Dave.
--
David Whipp, Siemens AG (HL DC PE MC), MchB
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own; factual statements may be incorrect.

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Thanks to all those who replied to the original question. It seems that the project matrix is still considered a useful project management tool. Would anyone who has used the project matrix share it with the list? I am interested in seeing concrete examples of its usage.

Kind Regards,
Allen

P.S. Thanks for the ones I got, but I am still looking for even more project management refs.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> But then, the only substantive difference between elaboration and translation is the path. Notations, and even modelling formalisms, are not important.

True -- IF one chooses the sequential translation-of-formalisms approach. If one uses the octopus model, where a central Mapper combines the formalisms in a simultaneous (continuous?) fashion, that would not be the case.

> However, there is a lot that can be done even without rigorous low-level specification. If you ignore wormholes, then all processes either have no side effects or have well-defined side effects. This means that it is possible to construct test plans from the model, and to get test-coverage statistics. Wormholes can be handled during component testing because they are the stimulus-response ports of the model.

> One area that cannot be covered is domain testing (testing boundary conditions). This is not possible because an OOA model specifies neither the transforms, the tests, nor the filters (for accessors). You are right to point out that this does need to be sorted out. (I have never found it to be a problem because most of my transforms are no more complex than standard arithmetic and logic functions ... but, strictly speaking, it is wrong even to assume the definition of "+".)

Just to clarify my problem with domain simulation... Since the OOA does not provide any specification of even the simplest transform operation, like +, the simulation can't know the value of the resulting data element.
If that element is later referenced in the condition of a test (e.g., x > 0) the model simulator cannot know which data flow to follow out of the test.

> My preference is to use some form of declarative specification for processes. Many people do not like declarative specifications. The objections are exemplified in Steve's latest paper (Precise Action Specifications for UML) where he says: "there is often a need to include some level of algorithmic specification to ensure efficient execution."
>
> I do not believe that this is a valid concern for an OOA model because much of the supposed inefficiency is already embedded in the formalism of the ADFD. (Non-functional requirements are supported through coloration.) The lack of side effects in the processes allows declarative specification through pre- and post-conditions to be a realistic approach.

While I agree for the most part, I think there may be some exceptions. For example, suppose I pass X into a Test, "Is X Green or Blue?". If it requires a lot of work to determine if X is Green but it is trivial to determine if it is Blue, then this places a constraint on the efficiency of the test that is not handled by the context or pre-/post-conditions. [Admittedly a weak example because the test itself is likely to know which condition is more complicated to check, but one can extend this to more complicated situations, such as operations on two sets, where the process might need guidance (i.e., the caller knows a priori but the process needs to do work to determine the optimal strategy).]

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

> Lots more agreement here. In my most successful uses of a project matrix, we aimed for making each project matrix cell (process step X for subsystem Y) be as close to two weeks of calendar time as possible. We did our best to never have any less than one calendar week nor more than three calendar weeks. This was partly done by adjusting the number of people we assigned to each cell, for example if the original estimate was more than 3 person-weeks then we assigned 2 or more people to that cell.

I would second this view. Several years ago we ran a QIT to reduce scheduling overruns. The QIT ran an experiment where the person estimating a real project did it twice. The first pass was done our usual way at that time; the second decomposed all of the larger tasks from the first pass into tasks that were no longer than 2 weeks of unadjusted effort (as opposed to duration). Lo and behold, the second pass produced an estimate that was 30% greater. So two weeks appears to be a Magic Number, whether it is effort or elapsed time.

> So instituting these two rules:
>
>   The person who does step X for component Y _cannot_ do step X+1 for component Y, and
>
>   The person(s) responsible for step X+1 for component Y _must_ be a reviewer in the review/inspection/walkthrough for the step X of component Y document
>
> seems to go a long way towards making sure that the necessary information gets put into the documentation and that the reviews/inspections/walkthroughs are particularly effective.
> I have noticed such a big difference in the effectiveness of the reviews/inspections/walkthroughs where this situation happened by accident that it seems worthwhile trying more globally.

FWIW, we do not seem to observe this problem. I think, though, that this may be because we culturally tend to drive greater detail into the earlier development phases. [We figured out even before OO that coding is the least important activity in software development and all the issues should be resolved in the earlier specifications so that the coding can be done with a large bag of bananas and a reasonably intelligent orangutan. This is one of the reasons that S-M appealed to us -- the focus on unambiguous specification and auto code generation.] Thus missing information becomes a major defect for the review that all the reviewers are conditioned to look for.

Another reason may be that our reviewers are never shy about pointing out such things. It probably helps that on large projects we often do not know who will be executing which task in the next phase when the current phase's specifications are being reviewed.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> P.S. Thanks for the ones I got, but I am still looking for even more project management refs.

It would help to get more of an idea of what kinds of project management references you are looking for. It's really quite a broad topic area and there are a number of good books around. Two of my all-time favorites are:

Tom DeMarco and Tim Lister, "Peopleware", Dorset House, ~1988
Robert Block, "Politics of Projects", Yourdon Press, ~1984

Although on the surface neither of these looks entirely project management related, they are actually incredibly relevant and useful. There's also Fred Brooks' classic "The Mythical Man Month", but I do have a few disagreements with some of his recommendations. I think that Robert Townsend's "Up The Organization" is another classic, but it's more management-in-general than software project management specific. I also got a lot of valuable insight out of Stephen Robbins' "Essentials of Organizational Behavior", but I don't have my copy handy to give you the full reference.

Eliyahu Goldratt has a number of good books around, "The Goal", "The Haystack Syndrome", ... but these are more manufacturing-management and I haven't figured out how to apply his ideas to software engineering management yet. There's something worthwhile in there, now if I could just figure out how to apply it to software...

I'd consider much of Gerry Weinberg's work to be software project management-related and I have yet to find a Weinberg book that I didn't really enjoy reading.

Since much of project management has to do with metrics, see Conte, Dunsmore, Shen, "Software Engineering Metrics and Models", Benjamin Cummings, 1986.

When I took a software project management course at Seattle U, the textbook was Kezsbom, Schilling, Edward, "Dynamic Project Management", Wiley, 1989. I'd have to say that the book is reasonably useful but it's not quite top-shelf material.

Now, I've probably pointed you at several hundred dollars' worth of a shopping list along with 12 months or so of pretty intense reading. You still want more???
:^)

Cheers,

-- steve

PS. I recently got a copy of John Hoschette's "Career Advancement and Survival for Engineers", Wiley, 1994. Not exactly project management material, but definitely self-management material. Put it in the category of "things I wish I had known when I was a whole lot younger..." or "why didn't they teach us the _really_ important stuff when I was in school?".

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson...

One point I should have been more clear about. My 60% number includes architectural customizations and whatever one has to do to colorize (e.g., generating attribute tables, etc.) for a particular application. Most of the points below hinge on this.

A second point that may need clarification is: 60% of What? Generated code is usually not a good measure of program size because it tends to be highly bloated and mind-bogglingly redundant. [We have an application that weighed in at 80+ Mb of object code generated and 10 Mb manually coded.] We happen to use an action language and my guesstimate was mostly based upon that. When I say that 60% of the system is realized I essentially mean that less than 40% of all the action language will be for state actions. Roughly 50+% of the action language is for synchronous services (i.e., transforms, bridges, and other complex processes) and 5-10% is in application-specific architectural customizations for translation.

You do things quite differently than we do, but my assertion is that you probably still have a fairly large chunk of any of your applications that would not be defined in a pure OOA. You have defined that information in architectural customizations (evidently more than we have) and in enhancements to the ADFDs that allow action language-like coding in the ADFDs and which support the architectural customizations.

> > ...we used a simple phrase to describe the process like "Compute average". The methodology supports only three pieces of information for an ADFD process: identifier, description, and process type....
>
> Unless operating on discrete attributes, Compute average would have to be implemented using a Call process. Process identifiers in a fully translated ADFD are meaningless and can be removed. Description is only necessary for documentation of complex processes; since we chose to analyze the content of these processes rather than leave them to be hand coded at a later date, descriptions are seldom needed. Type resolves to the kind of process. For instance a read process has an identifier inflow and attribute outflows. Therefore, all it needs in the process bubble is 'Read'.

I think I have missed your point. The Call is going to contain realized code, right? And it is needed because the OOA does not define the processing, right?

> > You may be able to bury it in the bridges, but things like units conversions and other reformatting have to be done somewhere.
>
> I think unit conversions should be handled in one of two ways. Either analyze them, or use attribute types which get resolved into base type classes which know how to convert themselves.

I don't think they should be analyzed (see several paragraphs below). If you define types as an enhancement to the OOA, you're still adding information by hand (albeit trivial if the translator is smart). Also, someone has to build the architecture (albeit reusable over many applications for the routine stuff like MKS units).
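As an illustration of what such a self-converting base type class might look like, here is a hedged sketch in C++; the class name and the power-of-ten scaling scheme are assumptions for illustration, not anyone's published architecture:

    // A minimal sketch of an architectural base type that "knows how
    // to convert itself". Scaled and its decimal-exponent scheme are
    // hypothetical.
    class Scaled {
    public:
        // Hold a value with its decimal exponent: Scaled(1.5, 0) is
        // 1.5 V, Scaled(1500, -3) is 1500 mV.
        Scaled(double value, int exponent) : v_(value), e_(exponent) {}

        // Addition normalises the scales first, so the analyst never
        // writes an explicit conversion process in the ADFD.
        friend Scaled operator+(Scaled a, Scaled b) {
            while (a.e_ > b.e_) { a.v_ *= 10.0; --a.e_; }
            while (b.e_ > a.e_) { b.v_ *= 10.0; --b.e_; }
            return Scaled(a.v_ + b.v_, a.e_);
        }

        // Read the value out at whatever scale the caller wants.
        double as(int exponent) const {
            double v = v_;
            for (int e = e_; e > exponent; --e) v *= 10.0;
            for (int e = e_; e < exponent; ++e) v /= 10.0;
            return v;
        }

    private:
        double v_;  // magnitude
        int    e_;  // decimal exponent of the stored magnitude
    };

With something like this in the architecture, Scaled(1.5, 0) + Scaled(500, -3) yields 2 V no matter which scales the analyst happened to mix.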
BTW, I'm afraid I don't like the idea of adding types to the OOA because it strikes me as creeping implementation. Data typing seems to me to be the realm of the architecture, not the analysis.

> > Fifth, even when we were doing ADFDs, we deliberately aggregated processes simply because the ADFDs were getting too complicated.
>
> I think it was Leon Starr who made the statement that complicated ADFDs point to either incorrect object modeling or incomplete state modeling.

I disagree with the strength of that statement. For one thing there is a lot of boilerplate in ADFDs to collect identifiers through a succession of relationships that access different data stores. We typically have 20-50 objects in a domain with a correspondingly large set of relationships. Just getting the identifiers for the instance with the relevant data can require navigation of several relationships, each of which requires at least one process. That is a very verbose mechanism to describe relationship navigation. When you have a lot of passive or specification objects, as we do, you spend a lot of bubbles getting the data that a relatively simple action needs to process.

I also feel that it is important to be clear that complicated ADFDs do not _necessarily_ indicate incorrect modeling. I believe that the IM, SM, and PM models not only represent different views of the system, they also represent different levels of abstraction. The states one designates for active objects are based upon a higher level of abstraction than the data flows and they are identified by considering flow of control and the intrinsic properties of the system at a higher level. If the active objects and states have been identified correctly when doing those models, then the complexity of the ADFDs is irrelevant. This is strictly a matter of how much low level functionality _happens_ to correlate to the state abstraction. The fact that badly formed models often happen, as a useful coincidence, to have complex ADFDs is simply a diagnostic flag for reviewers that suggests that there _might_ be a problem.

> > That test process is going to be realized code somewhere and, depending upon the complexity of B, it could be a substantial amount of code.
>
> Complex actions can be represented as a comparison when attributes are classes. For instance an A == B comparison where A's base type was a string and B's was an integer would cause B to be converted to a string and the resultant strings to be compared to see if all characters are the same and in the same position, and the strings are the same length. The 'realized code' is in the architectural domain.

True, this would be a common thing to standardize in the architecture if string/integer conversions were commonplace in one's typical applications. However, if the current application, among all those likely to be built in a given environment, is the only one that does such a conversion, then I would regard that as hand coding even if it is placed in the architecture.

The point I was trying to make is that a typical application has a lot of tests and transforms that are specific to that application and which are sufficiently abstract to result in complicated implementations. For example, a descriptive line, "Sort Gnomes by Height" is probably a nice abstraction for most state actions. And it will almost certainly be a single transform in the ADFD. But the implementation could be 200+ lines for an optimized quicksort.
If you happen to have a library routine capable of dealing with gnomes, by all means tuck that sucker in the architecture. But it is still 200+ lines of code that the developer provides for that transform.

> > Seventh, I suspect that you may be doing a lot of colorization for the translation...
>
> At this time, only objects have color, and they only specify whether the object is dynamic RAM based or the type of database table.

Fascinating. Then I don't understand how it is that most of your elementary processes (read, write, etc.) can be translated directly from context. It seems to me that you would have to at least color a concrete base type for each attribute. I assume this is related to my not understanding how you analyze attribute units in the OOA. Or do you regard the definition of type for a specific attribute as a translation rule for the application rather than a colorization?

> > Last, I don't know how your translator is able to generate code from all processes based upon context in general. The only process where this is always possible is the event generator. All create, delete, read, write, transform, and test processes potentially require manual code because the OOA does not define the low level details.
>
> Create, Delete, Read and Write are architecture issues. The analysis doesn't care whether the attributes are in memory, database table retrieved from a remote server via TCP/IP... The identifiers and attribute inflows and outflows define the actions. The architecture implements them.

I guess I wasn't making myself clear so let me belabor for a moment with due diligence. Consider an example of a create accessor for an object with one data attribute, say Volume. If the accessor is handed a data flow value that corresponds to a volume and it is in the correct units, then what you say is true. One can easily construct a general purpose translator that will Just Work for all applications whose Create accessors match this model.

If, though, the create accessor is given two data flow elements, Mass and Density, then that create accessor has to calculate the Volume value. That is not an architectural issue in my view because the calculation depends upon problem space knowledge (i.e., the physical laws by which Volume, Density, and Mass are related). You can customize the translator to Do the Right Thing but you are still effectively hand coding the processing for the create accessor.

My argument is that for all of the processes except the event generator this sort of application-specific issue _might_ arise in the general case. Thus it is not possible to write a general purpose translator that can process ADFDs and always generate correct code for Create, Delete, Read, or Write based simply upon OOA information. At best the general purpose translator may be able to identify situations when it can't translate based upon context.

I believe that the litmus test is whether a general purpose translator would Do the Right Thing. If it won't for a particular application then the architecture, the coloring, or the translation rules need to be customized. If so, one is effectively hand coding to do processing not specified in the OOA.

> > For example, I have never seen a significant non-MIS application that did not require a units conversion somewhere. All the OOA supports is a range of values -- it is left as an exercise for the developer to associate units with those values and ensure that those units are consistently processed.
>
> Unit conversions are analyzed not hidden.
I assume by this that you mean the analyst will stick a transform into the ADFD to do the conversion. While this is syntactically valid, I can't say that I like it very much. The problem I see is that it adds to the ADFD clutter to solve a problem that the methodology deliberately relegates to the architecture. The methodology treats data abstractly and it dumps things like data typing into the RD. It seems to me that to be consistent one should not have data conversions in the OOA because they imply strong data typing -- an explicit conversion in the ADFD reeks of implementation. For example, what if your OOA conversion assumes a value is real volts but the architect decided integer millivolts would be faster for some mind-numbing iteration elsewhere in the application? By placing that conversion in the OOA (and the specification of type in the IM) you have usurped one of the architect's prerogatives.

> > ...prior to OOA96 one could pass two values to a write accessor and have it write a single attribute calculated from those values...
>
> Again, this should be analyzed to show what is happening. First hit a Set process to do the calculation, then flow the result to the Write.

OK, but I argue that you have effectively hand coded the processing in the ADFD through the use of the enhancements that you have introduced. Put another way, you have introduced enhancements to the ADFD notation that allow you to express computations in the ADFD that would have been realized code if only the method's ADFD notation was used. Code that isn't defined in a methodology ADFD is still being written -- you are just doing it in your version of the ADFD notation.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I would argue that this is an ideal situation for an application-specific translator. (I'm sure you've mentioned automatically generated bridges in previous posts - this is just an extension of those.)

In effect, that is pretty much what we did. By making it a domain it became reusable because we could plug the domain into other applications. However, the innards of the domain were effectively the translator.

There were three steps. We had a hook in our existing code generator to create a routine call for each relevant (colored) write accessor with arguments of Field ID and Value. The Field ID was, obviously, the link to the specific attribute. The domain that translates field to register provides the code for the routine call. That domain was initialized for the specific hardware from the mongo Field table. That domain then did direct VXI register reads and writes.

Now suppose that, instead of a domain to field the function w/ Field ID and Value, we had a translator to create code for a routine with a mongo jump table on Field ID to do the register writes. That routine could have been created with a translator whose innards were exactly the same as the domain's. Instead of calling a VXI routine it would do the printfs for the routines in the jump table. Instead of the input bridge function it would have a driver to iterate over the Field table entries.

The downside of our approach is performance -- effectively we interpret every write accessor into the proper VXI reads and writes.
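For concreteness, the interpreted path just described might look roughly like the following sketch (hedged: read_vxi and write_vxi are hypothetical stand-ins for the real VXI access routines, and Field is the kind of table row discussed earlier in the thread):

    // A hedged sketch of fielding one write accessor at run time,
    // assuming C++; read_vxi/write_vxi are hypothetical stand-ins
    // for the real VXI register access routines.
    #include <cstdint>

    std::uint32_t read_vxi(std::uint32_t address);              // assumed
    void write_vxi(std::uint32_t address, std::uint32_t data);  // assumed

    struct Field { std::uint32_t address; unsigned lsb; unsigned width; };

    // Interpret a colorized write accessor: look up the field, then do
    // a read/modify/write so neighbouring bits in the register survive.
    // (Fields narrower than 32 bits are assumed.)
    void write_field(const Field& f, std::uint32_t value) {
        const std::uint32_t mask = ((1u << f.width) - 1u) << f.lsb;
        std::uint32_t reg = read_vxi(f.address);
        reg = (reg & ~mask) | ((value << f.lsb) & mask);
        write_vxi(f.address, reg);
    }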
If we had translated instead, the writes would have been hard coded directly. (Fortunately in the manually coded version this was not a big issue because everything was essentially table lookups anyway.) The upside is that we can plug the domain into other applications.

We could have translated for other applications as well. But looking far down the road, it might be desirable for our users to define the hardware (i.e., it ain't our tester). This way they could do that by providing the tables that instantiated the domain without having to build anything. [Very clever of us, right? Actually, we just didn't have enough experience with RD at the time to think of doing it your way -- it was our first large S-M project.]

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to lahman...

> You do things quite differently than we do, but my assertion is that you probably still have a fairly large chunk of any of your applications that would not be defined in a pure OOA. You have defined that information in architectural customizations (evidently more than we have) and in enhancements to the ADFDs that allow action language-like coding in the ADFDs and which support the architectural customizations.

Vice versa... We extended ADFDs to allow direct translation and then customized the architecture to support it.

> BTW, I'm afraid I don't like the idea of adding types to the OOA because it strikes me as creeping implementation. Data typing seems to me to be the realm of the architecture, not the analysis.
> ....
> For example, what if your OOA conversion assumes a value is real volts but the architect decided integer millivolts would be faster for some mind-numbing iteration elsewhere in the application? By placing that conversion in the OOA (and the specification of type in the IM) you have usurped one of the architect's prerogatives.

I disagree. If I define my attribute as being of type volts, I can then work with it in volts, multiply by 1000 to get millivolts, etc. If the architecture chooses to store the value in microvolts squared, then so be it. From the point of view of the OOA, this is irrelevant. All of the operations had better still work. The architecture is responsible for any necessary type conversions needed to allow the OOA to work in its native volts. All attributes need a type as part of their definition in order to make them useful. This is not the base type or architectural storage type, it is the type as defined by the OOA problem space.

> Or do you regard the definition of type for a specific attribute as a translation rule for the application rather than a colorization?

Sorry, I understand your point now. I guess you could view the typing as colorization; I was viewing it as part of the OOA attribute definition. (Every attribute has an OOA type. OOA types are defined in terms of architecture base types and ranges. The translator then turns these into data types as it sees fit. For example: Volts -> number ranging from 0 to 540 with a precision of 3 decimal places -> long (represented in millivolts).)

> If, though, the create accessor is given two data flow elements, Mass and Density, then that create accessor has to calculate the Volume value.

We do not allow this. Only attributes can flow into create processes.
If mass, density and volume are all present in the OOA then I think the conversion between them is properly at that level of abstraction. We would flow mass and density into a Set process and flow Volume out of the Set and into the Create.

> OK, but I argue that you have effectively hand coded the processing in the ADFD through the use of the enhancements that you have introduced. Put another way, you have introduced enhancements to the ADFD notation that allow you to express computations in the ADFD that would have been realized code if only the method's ADFD notation was used. Code that isn't defined in a methodology ADFD is still being written -- you are just doing it in your version of the ADFD notation.

Agreed. The difference being that all documentation is contained on the ADFD drawings. There is no chance that the hand coding can significantly change what is documented by the diagrams.

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com   www.transcrypt.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp...
>
> > But then, the only substantive difference between elaboration and translation is the path. Notations, and even modelling formalisms, are not important.
>
> True -- IF one chooses to use the sequential translation of formalisms approach. If one uses the octopus model where a central Mapper combines the formalisms in a simultaneous (continuous?) fashion, that would not be the case.

Whether translation is sequential or concurrent; whether you use elaboration or just plain hacking: if the start point (the model[s]) and the end point (the code) are the same for each case then the only difference is in the route between them. Of course, there may be a difference in the code produced by the different methods (bug mechanisms are different). This may persuade someone to choose one method over another. This is a property of the route itself, not of adornments on the initial models (which is where this sub-thread started).

> Just to clarify my problem with domain simulation... Since the OOA does not provide any specification of even the simplest transform operation, like +, the simulation can't know the value of the resulting data element. If that element is later referenced in the condition of a test (e.g., x > 0) the model simulator cannot know which data flow to follow out of the test.

The problem is more direct than that: if the predicate of the test is not specified in the OOA then it doesn't matter what the input data is: the simulation still can't determine the result.

> > [use declarative specification ... don't consider performance]
> While I agree for the most part, I think there may be some exceptions. For example, suppose I pass X into a Test, "Is X Green or Blue?". If it requires a lot of work to determine if X is Green but it is trivial to determine if it is Blue, then this places a constraint on the efficiency of the test that is not handled by the context or pre-/post-conditions.

Presumably the process would have 3 outputs: X-is-green, X-is-blue and OTHER. The activation of these outputs can easily be specified. If I need to pass information to the architecture about the cost of the checks (or priority of evaluation; or suggested algorithm) then I can just get out my highlighter pens and colour the outgoing flows.
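A minimal sketch of that three-output test, with hypothetical names; the comments stand in for the colorations that would guide the architecture, since coloration itself lives on the model rather than in code:

    // A hedged sketch in C++; ColourTest, is_blue and is_green are
    // hypothetical. The declarative spec only says which single output
    // activates; the evaluation order below is an architectural choice
    // guided by the cost colorations.
    enum class ColourTest { XIsGreen, XIsBlue, Other };

    bool is_blue(int x);   // assumed cheap  (colour: evaluate first)
    bool is_green(int x);  // assumed costly (colour: evaluate last)

    ColourTest classify(int x) {
        if (is_blue(x))  return ColourTest::XIsBlue;
        if (is_green(x)) return ColourTest::XIsGreen;
        return ColourTest::Other;
    }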
For more complex processes I may need to colour specific clauses within the specifications; but the principle is the same.

It may be that the best practical use of a declarative specification is to use it to define [rigorous] testing of a hand-written implementation. Tests/Transforms have no state and no side effects, so the problem is manageable. If necessary then proof tools can be used.

The real issue is that the OOA model must be grounded somewhere and execution semantics defined at this level. Whatever technique is used to ground it, it should not be implementation biased. However, it should also be possible to use colorations to add additional information that can be used later in constructing an implementation.

As soon as you allow any implementation considerations to creep into the model then analysts become designers because they must worry about the execution efficiency of their models. It is essential that the analyst is able to separate the functional information from the performance information; even when both must be specified.

Dave.

--
David Whipp, Siemens AG (HL DC PE MC), MchM
mailto:David.Whipp@hl.Siemens.de   Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Simonson...

> I disagree. If I define my attribute as being of type volts, I can then work with it in volts, multiply by 1000 to get millivolts, etc. If the architecture chooses to store the value in microvolts squared, then so be it. From the point of view of the OOA, this is irrelevant. All of the operations had better still work. The architecture is responsible for any necessary type conversions needed to allow the OOA to work in its native volts. All attributes need a type as part of their definition in order to make them useful. This is not the base type or architectural storage type, it is the type as defined by the OOA problem space.

Consider the situation where the Architect has a library C++ class to handle MKS scaling where the data elements are the value and the scale factor. This class handles the scaling by overloading the arithmetic operators. Now executing the conversion process would, at best, be redundant and, at worst, generate incorrect calculations. So the architect must do work to recognize conversion processes and completely ignore them. When the architect has to ignore ADFD processes I argue that it is a sign that something is being put in the OOA that shouldn't have been there.

> > Or do you regard the definition of type for a specific attribute as a translation rule for the application rather than a colorization?
>
> Sorry, I understand your point now. I guess you could view the typing as colorization; I was viewing it as part of the OOA attribute definition. (Every attribute has an OOA type. OOA types are defined in terms of architecture base types and ranges. The translator then turns these into data types as it sees fit. For example: Volts -> number ranging from 0 to 540 with a precision of 3 decimal places -> long (represented in millivolts).)

We are in Quibble City now, but I think you have extended the OOA definition here as well. There is no "OOA type" in the OOA; the only thing that is defined is the data domain, which is described descriptively. The Range abstraction is closest because the specification of units is allowed, but even that is descriptive. To demonstrate the point of my quibble here, consider the example in OOSA on the bottom of pg. 37: "0-500 Kelvin". I could specify exactly the same data domain as "Absolute degrees 0-500" for another attribute. That would present some interesting problems for a translator in the general case. What I *think* you have done is provided an additional syntactic formalism in your descriptions that provides things like dimensions and scale for the translator, but the analyst must adhere to the conventions when writing descriptions.

> > If, though, the create accessor is given two data flow elements, Mass and Density, then that create accessor has to calculate the Volume value.
>
> We do not allow this. Only attributes can flow into create processes. If mass, density and volume are all present in the OOA then I think the conversion between them is properly at that level of abstraction. We would flow mass and density into a Set process and flow Volume out of the Set and into the Create.

Interesting. Such a create *is* supported in the methodology and this is even a likely situation if the created attribute is derived. Instinctively I am bothered by the separation of Set and Create for derived attributes. However, I spent twenty minutes coming up with an incredibly elaborate example involving FASB maintenance in an accounting system that, in the end, didn't demonstrate a basis for my worry. Perhaps not a big surprise in the overall scheme of things.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > > But then, the only substantive difference between elaboration and translation is the path. Notations, and even modelling formalisms, are not important.
> >
> > True -- IF one chooses to use the sequential translation of formalisms approach. If one uses the octopus model where a central Mapper combines the formalisms in a simultaneous (continuous?) fashion, that would not be the case.
>
> Whether translation is sequential or concurrent; whether you use elaboration or just plain hacking: if the start point (the model[s]) and the end point (the code) are the same for each case then the only difference is in the route between them.

I didn't phrase that very well -- it took me a minute to remember what I was talking about. I was still on the issue of whether there is a difference between translation and elaboration. What I should have responded was something to the effect that if one chooses sequential translation of formalisms, then there is no significant difference between the approaches because both paths look pretty much the same. But if one chooses the concurrent approach to translation, the paths are very different and this constitutes a significant difference in the approaches. [Which I think was agreeing with a point you made earlier.]

> The problem is more direct than that: if the predicate of the test is not specified in the OOA then it doesn't matter what the input data is: the simulation still can't determine the result.

True, but I don't think it is significant to simulation unless the result affects flow of control.
Unless there is a test somewhere that directly or indirectly depends upon the value of the result AND that test determines whether a particular event is generated or a particular instance is created/deleted, the simulation probably doesn't care about the result value. Of course this depends upon what one regards as "correct" model simulation behavior. I usually regard model simulation as a tool to demonstrate correct flow of control at the event level, correct instance instantiation, and correct referential integrity but not necessarily the correctness of attribute values.

> > > [use declarative specification ... don't consider performance]
> > While I agree for the most part, I think there may be some exceptions. For example, suppose I pass X into a Test, "Is X Green or Blue?". If it requires a lot of work to determine if X is Green but it is trivial to determine if it is Blue, then this places a constraint on the efficiency of the test that is not handled by the context or pre-/post-conditions.
>
> Presumably the process would have 3 outputs: X-is-green, X-is-blue and OTHER. The activation of these outputs can easily be specified. If I need to pass information to the architecture about the cost of the checks (or priority of evaluation; or suggested algorithm) then I can just get out my highlighter pens and colour the outgoing flows. For more complex processes I may need to colour specific clauses within the specifications; but the principle is the same.

I misunderstood. I thought you were proposing that the declarative specification could stand upon its own given context and pre-/post-conditions (i.e., it was sufficient).

> The real issue is that the OOA model must be grounded somewhere and execution semantics defined at this level. Whatever technique is used to ground it, it should not be implementation biased. However, it should also be possible to use colorations to add additional information that can be used later in constructing an implementation.
>
> As soon as you allow any implementation considerations to creep into the model then analysts become designers because they must worry about the execution efficiency of their models. It is essential that the analyst is able to separate the functional information from the performance information; even when both must be specified.

But haven't you been contending for a while now that implementation considerations are sometimes unavoidable in the OOA? Or are you simply arguing that performance related implementation considerations need not appear in the OOA?

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Michael M. Lee" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responses are embedded - Michael

At 09:42 AM 8/20/98 -0500, you wrote:
> "Stephen R. Tockey" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Michael M. Lee replied to Allen Theobald
>
>> > The following questions center around, "The Project Matrix: A Model for Software Engineering Project Management, IEEE".
>> >
>> > Is this still a "valid" management tool?
>> Yes, I think it's very helpful in producing a work breakdown structure for the work in any area where you can define a common set of development steps (rows) for a set of system components (columns). In S-M, the process steps are the modeling, model coloring/allocation, and remaining translation steps that are not automated, and the system components are subsystems.
>
> Lots of agreement here.
>
>> IMO, one of the key factors to making the project matrix (or any other task identification/tracking scheme) work well is to have a sufficiently fine resolution on the process steps that they are relatively small (measured in weeks) tasks and have well defined completion criteria. This makes them easier to estimate, track, and control.
>
> Lots more agreement here. In my most successful uses of a project matrix, we aimed for making each project matrix cell (process step X for subsystem Y) be as close to two weeks of calendar time as possible. We did our best to never have any less than one calendar week nor more than three calendar weeks. This was partly done by adjusting the number of people we assigned to each cell, for example if the original estimate was more than 3 person-weeks then we assigned 2 or more people to that cell.

Yes, I agree entirely on the task size, but there is a limit to the amount of time by which you can reduce a task duration by increasing the number of people working on that task. In order to reach this 1-3 week size, sometimes further partitioning the task is your only option. More on this later...

> IMHO, you'd never want to let a cell be more than 3 calendar weeks of time or it stands a chance of getting out of control. OTOH, when cells are very short duration (1 week or less) it's too easy for people to get the feeling that they are being micro-managed. The "aim for 2 calendar weeks" seems to strike a nice balance between keeping the project under control and not giving the team members the impression they are being micromanaged.
>
> I also want to echo "... and have a well defined completion criteria". But I'd also add that
>   a) those completion criteria are the basis for an inspection/review/walkthrough (i.e., the criteria are the inspection checklist), and

Hmmm... yes and no. If my completion criteria for the OIM are correct syntax and accurate (relative to requirements) semantics, I don't quite have a checklist against which to review, especially for the semantics. Am I missing something here or am I putting too much content into the "checklist"?

>   b) the corresponding project matrix cell is not marked as complete until the inspection/review/walkthrough ends in acceptance of the work-product (be sure to include inspection preparation, actual inspection, and subsequent re-work time in the estimate for each cell)

Yes, absolutely!

> BTW: I've always wanted to try the following experiment but was never quite able to fit it into a real project. The theory is that if I am responsible for completing step N for some component in a project matrix where I was _not_ the person who did the deliverable for that component's step N-1, then I will be particularly picky in the review/inspection/walkthrough for the step N-1 deliverable to be sure that it is adequate for me to do my job. I'd want to be sure that as much of the important information as possible was in the document, not just in the head of the author.
> So instituting these two rules:
>
>   The person who does step X for component Y _cannot_ do step X+1 for component Y, and
>
>   The person(s) responsible for step X+1 for component Y _must_ be a reviewer in the review/inspection/walkthrough for the step X of component Y document
>
> seems to go a long way towards making sure that the necessary information gets put into the documentation and that the reviews/inspections/walkthroughs are particularly effective.
>
> I have noticed such a big difference in the effectiveness of the reviews/inspections/walkthroughs where this situation happened by accident that it seems worthwhile trying more globally.

I've had the same experience, and regularly enforce the second rule whenever there's a hand off of a work product. Enforcing the first rule is often unrealistic on tight schedules.

>> I do not think the OOA models (OIM, SM, PM), which are usually the rows shown on a Project Matrix, offer sufficient resolution.
>
> I guess I disagree with this. I think the key difference is that I'd say "adjust the number of people assigned to the cell to bring the flow time as close as possible to two weeks" rather than move to finer-grained steps. I'll explain this in the context of Mike's suggestions.

Here's where I think you can easily get into a "mythical man-month" situation where adding any more people actually slows the work down (worst case) or doesn't speed it up any more ("best" case).

WRT the particular steps I offered below, they were just an example of one way to add finer granularity to the work. I don't think there's any perfect or right way to do this -- different projects and staffing profiles will move you in different directions. The point was simply to define smaller tasks so things could be tracked better and teams "getting lost" could be recognized and helped sooner. Nonetheless, I will attempt to further clarify my thinking on these particular tasks, since I have found them useful in real-world projects.

>> For example, I break the OIM into the following steps:
>>
>> 1. Write technical note capturing/clarifying the requirements that are to be modeled. (See Leon Starr's book for good examples of this). Review with domain specialists.
>
> My concern with technical notes used in this form is twofold. First, everything in the technical note ends up being redundant with something in one or more of the later models. Thus, when the OIM, SM, and PM are done the technical note is entirely replaced by the actual models.

I disagree on the models replacing the technical notes for a few reasons:

* The model does not capture the thinking and trade-offs that went into formulating the conceptual model that the OOA captures. Understanding this is frequently important for maintaining and revising the models.

* A good technical note will be profusely illustrated and rich with analogies. There is no way to capture or communicate these with the models.

> I'm concerned with doing the work more than once (especially when I can then throw one of those away) as well as if I decide to keep the technical notes around then there's the problem of having to maintain identical information in more than one place.

Though I do manage them with the rest of the project's documents, I don't attempt to keep them up to date past the point where they have served their purpose of developing a good set of models. Perhaps not ideal, but quite practical.

> Second, given the built-in ambiguities etc.
> of natural language, it's usually hard to tell when time spent on technical notes is really adding value (i.e., increasing our knowledge) vs. just re-packaging the knowledge we already have. It's simply too easy for the non-value-added time to take a huge bite out of the project without anyone really noticing until it's too late.

This can happen, I'll admit, and needs to be guarded against. When good references already exist (requirements, product definitions, etc.) they should be referenced, not restated. This should not become "busy work".

> I prefer to reserve technical notes for those critical items that we just can't seem to find a way to express in the existing models.

A few more reasons that I find technical notes useful:

* They are an ideal place to map requirements to the models when you're doing formal requirements tracing.

* They give you something specific and scoped against which to review the models.

>> 2. Build preliminary OIM (no object, attribute, relationship descriptions). Hold a walkthrough with subsystem team members. Assess the degree to which the model captures the requirements documented in the technical note. If new requirements are uncovered, include revision of the technical note into the next step.
>
> My concern with this step is that it appears to violate the "... and have a well-defined completion criteria" guideline. With no descriptions, it's really difficult to see that all the normalization has been done properly and that what I interpret X to mean is the same thing that everyone else interprets X to mean. Maybe I'm being a bit too pessimistic here, but I see too much room for waffling in this.

The completion criterion is, I believe, quite well defined: a syntactically correct graphic -- I'm assuming a modeling tool that provides this. But yes, without descriptions, only those familiar with the subject matter can be useful in offering feedback. What I failed to explain here is that the walkthrough is conducted as a presentation/explanation given by the model developers, not a review. The objective is to see if things are "on track", not that all the t's are crossed and i's dotted. This has proven extremely useful with engineers new to the modeling. It allows them to get early feedback and avoid inadvertently "going down a rathole".

> I can see holding an _informal_ meeting as a sort of mid-course correction kind of thing, but I'd personally be wary of basing project management status on such an informal thing.

I disagree on not basing project management status on this step. I think that verifying (or not) that the abstractions are successfully capturing the requirements and that the full scope of the requirements is being addressed is a significant milestone (or not). No, the work's not done, but I am (or not) making progress. This is important, if somewhat soft, tracking information to me.

>> 3. Revise per walkthrough and complete OIM. Distribute for formal review.
>>
>> 4. Hold formal review of OIM. Again, review against the requirements captured in the technical note and note any newly discovered or changed requirements.
>>
>> 5. Revise per review.
>>
>> > Does anyone use it?
>>
>> Yes, I use it whenever I'm helping my clients identify and schedule the work on their S-M projects.
>
> Ditto on the yes, but be aware that the project matrix is a useful tool whenever the work can be broken down in two dimensions (a series of consistent process steps applied to a set of product components).
> In other words, it doesn't have to be a S-M project. It doesn't even have to be a software project.

Indeed.

>> > Would anyone care to share their styles/examples in generating requirements documents and software specifications?
>>
>> This is usually client driven for me.
>
> I'll reserve comment on this one as I think there are actually some interesting philosophical discussions about a) "just what in the heck is a 'requirement' anyway?", b) to what extent should 'requirements' be written in natural language documents vs. 'formal' specifications, and things like that. But more often than not, these are really driven by the organizational policies of the place that's paying for the work (e.g., "we want to see documents X, Y, and Z, and this is what each of them is supposed to say").
>
> Off on a tangential, but still marginally related, topic: the packaging of technical content into deliverable documents seems to be more driven by the vagaries of the configuration management/change management policies and procedures than anything else. But that's another topic for another time...
>
>> > Most PM discussions revolve around philosophy and provide little practical knowledge.
>> >
>> > As usual any refs (books/mags/web sites) on the above are, indeed, appreciated!
>
> My first exposure to project management was in a book written by Meilir Page-Jones. I think it was called "Practical Project Management: Restoring Quality to Projects and Systems" or something like that. I thought it was very enlightening for a techie like me.

Yes, Meilir does have a way with words and reality that makes the material both entertaining and helpful.

Cheers - Michael

> OTOH, it's fairly basic project management stuff and there's plenty more advanced material that's likely more appropriate for an in-the-trenches project manager (e.g., all of Barry Boehm's risk-based stuff).
>
> Regards,
>
> -- steve

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to lahman ...

> ... There is no "OOA type" in the OOA; the only thing that is defined is the data domain, which is described descriptively. The Range abstraction is closest because the specification of units is allowed, but even that is descriptive.

Reference: "Data Types in OOA", Sally Shlaer and Steve Mellor, http://www.projtech.com

Excerpt: "The base data types defined for OOA are: enumerated, boolean, extended boolean, symbolic, numeric, ordinal, time, duration, arbitrary."

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com   www.transcrypt.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lee...

> Yes, I agree entirely on the task size, but there is a limit to the amount of time by which you can reduce a task duration by increasing the number of people working on that task. In order to reach this 1-3 week size, sometimes further partitioning the task is your only option. More on this later...

This was the reason that we settled on effort as the metric -- it is independent of the number of resources applied. While there is a project management advantage in learning about schedule problems earlier, I think the primary value lies in producing more reliable estimates.
When large tasks are partitioned the sum of the parts seems to be invariably larger than the whole until you get down to the level of a few weeks.

> Hmmm... yes and no. If my completion criteria for the OIM are correct syntax and accurate (relative to requirements) semantics, I don't quite have a checklist against which to review, especially for the semantics. Am I missing something here or am I putting too much content into the "checklist"?

I think this point would be more appropriate for the OCM, which most CASE tools can derive. As you point out below, the OIM is liable to have a lot of additional information in technical descriptions. I don't think work should progress to state models until the OIM has been reviewed and _justified_.

> I disagree on the models replacing the technical notes for a few reasons:
>
> * The model does not capture the thinking and trade-offs that went into formulating the conceptual model that the OOA captures. Understanding this is frequently important for maintaining and revising the models.
>
> * A good technical note will be profusely illustrated and rich with analogies. There is no way to capture or communicate these with the models.

I am in firm agreement here. (Though I might quibble about the format of a Technical Note -- it _might_ take the form of extensive descriptions on relationships, objects, processes, etc.) In particular, on the DC and OIM one needs to document the Why of the way the model was defined. I have looked at my own models after some time has passed and had to do a lot of memory draining to recall what specific aspect of the problem led to the definition of, say, a particular relationship. As a generalization I think that the simpler the notation the more justification there is for supplementary information -- the name on a DC oval simply does not convey the basis for the design team's fist fights on the way to determining what domains should be defined.

In a practical sense, such documentation surely makes life much easier for reviewers.

> The completion criterion is, I believe, quite well defined: a syntactically correct graphic -- I'm assuming a modeling tool that provides this. But yes, without descriptions, only those familiar with the subject matter can be useful in offering feedback. What I failed to explain here is that the walkthrough is conducted as a presentation/explanation given by the model developers, not a review. The objective is to see if things are "on track", not that all the t's are crossed and i's dotted. This has proven extremely useful with engineers new to the modeling. It allows them to get early feedback and avoid inadvertently "going down a rathole".

I would argue that at some point the work product needs to be reviewed in detail. What you describe is what we would regard as a preliminary walkthrough. A preliminary walkthrough of an OIM, for example, might address issues like whether the objects were at the appropriate level of abstraction for the domain, but not whether attribute lists were complete. However, we also have a detailed work product review that justifies every bubble and arrow. The signoff for this review is the exit criteria for the development phase.

For complex documents it is sometimes useful to split the final, detailed review into a couple of reviews, each with limited scope and, possibly, starting prior to the completion of the work product.
However, we usually only do this for things like Functional Specifications because the S-M work products are already highly focused.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano.
Teradyne/ATB
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Michael M. Lee" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

At 11:27 AM 8/26/98 -0400, you wrote:
> lahman writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Lee...
>
>> Yes, I agree entirely on the task size, but there is a limit to the amount of time by which you can reduce a task duration by increasing the number of people working on that task. In order to reach this 1-3 week size, sometimes further partitioning the task is your only option. More on this later...
>
> This was the reason that we settled on effort as the metric -- it is independent of the number of resources applied. While there is a project management advantage in learning about schedule problems earlier, I think the primary value lies in producing more reliable estimates. When large tasks are partitioned the sum of the parts seems to be invariably larger than the whole until you get down to the level of a few weeks.

Yes, having the sum of the parts add up to more than the whole can be a very real problem. As with most project management issues, I believe there isn't a right answer as much as there is a right balance between conflicting objectives: in this case a manageable granularity and the estimate explosion.

>> Hmmm... yes and no. If my completion criteria for the OIM are correct syntax and accurate (relative to requirements) semantics, I don't quite have a checklist against which to review, especially for the semantics. Am I missing something here or am I putting too much content into the "checklist"?
>
> I think this point would be more appropriate for the OCM, which most CASE tools can derive. As you point out below, the OIM is liable to have a lot of additional information in technical descriptions. I don't think work should progress to state models until the OIM has been reviewed and _justified_.

I absolutely agree on this point.

>> I disagree on the models replacing the technical notes for a few reasons:
>>
>> * The model does not capture the thinking and trade-offs that went into formulating the conceptual model that the OOA captures. Understanding this is frequently important for maintaining and revising the models.
>>
>> * A good technical note will be profusely illustrated and rich with analogies. There is no way to capture or communicate these with the models.
>
> I am in firm agreement here. (Though I might quibble about the format of a Technical Note -- it _might_ take the form of extensive descriptions on relationships, objects, processes, etc.) In particular, on the DC and OIM one needs to document the Why of the way the model was defined. I have looked at my own models after some time has passed and had to do a lot of memory draining to recall what specific aspect of the problem led to the definition of, say, a particular relationship.
As a >generalization I think that the simpler the notation the more justification there is >for supplementary information -- the name on a DC oval simply does not convey the >basis for the design team's fist fights on the way to determining what domains should >be defined. > >In a practical sense, such documentation surely makes life much easier for reviewers. I think you've made some very good and practical observations and suggestions above. > >> The completion criteria is, I believe, quite well defined: a syntactically >> correct graphic -- I'm assuming a modeling tool that provides this. But >> yes, without descriptions, only those familiar with the subject matter >> can be useful in offering feedback. What I failed to explain here >> is that the walkthrough is conducted as a presentation/explanation given by >> the model developers, not a review. The objective is to see if things are >> "on track", not that all the t's are crossed and i's dotted. This has >> proven extremely useful with engineers new to the modeling. It allows them >> to get early feedback and avoid inadvertently "going down a rathole". > >I would argue that at some point the work product needs to be reviewed in detail. Yes, definitely. This was the next step in my sub-OIM tasks, a complete model (with descriptions) and a formal review. >What you describe is what we would regard as a preliminary walkthrough. A preliminary >walkthrough of an OIM, for example, might address issues like whether the objects >were at the appropriate level of abstraction for the domain, but not whether attribute >lists were complete. However, we also have a detailed work product review that >justifies every bubble and arrow. The signoff for this review is the exit criteria >for the development phase. Agreed. > >For complex documents it is sometimes useful to split the final, detailed review into >a couple of reviews, each with limited scope and, possibly, starting prior to the >completion of the work product. However, we usually only do this for things like >Functional Specifications because the S-M work products are already highly focused. Yes, but I still think this is a good idea at times. Some larger OIM's can be reviewed by clusters, and some complex OCM's by scenarios. - Michael -------------------------------- M O D E L I N T E G R A T I O N Model Based Software Development 500 Botany Court Foster City, CA 94404 mike@modelint.com 650-341-2544(v) 650-571-8483(f) --------------------------------- "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- "Michael M. Lee" wrote > Yes, I agree entirely on the task size, but there is a limit to the amount of > time by which you can reduce a task duration by increasing the number of people > working on that task. According to my calculations, the maximum schedule compression is to approximately 37% of the having-one-person-do-it time. I wrote up a paper a while back on the effect of team size on productivity and project cost. I'm looking at publishing the paper, but it won't be out for a while. That paper gives the justification for the 37% limit. Anyway, I agree that there's a limit.
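Tockey's paper being unpublished, the model behind the 37% figure is not in this thread. Purely as an illustrative stand-in -- emphatically not Tockey's analysis -- a Brooks-style toy model in Python (the overhead constant is an arbitrary assumption) shows why elapsed time has a floor:

# Illustrative toy model only -- NOT Tockey's unpublished analysis.
# Assumption: each pair of people burns a fixed fraction of the total
# effort in communication overhead (a Brooks-style assumption).
def elapsed_weeks(effort, n, overhead_per_pair=0.1):
    """Elapsed time for n people sharing 'effort' person-weeks of work."""
    pairs = n * (n - 1) / 2
    return effort * (1 + overhead_per_pair * pairs) / n

effort = 52.0  # one person-year of effort
for n in range(1, 9):
    print(n, round(elapsed_weeks(effort, n), 1))

# Elapsed time falls from 52.0 weeks (n=1) to a floor of 20.8 weeks at
# n=4 or 5 -- about 40% of the one-person time -- then climbs again as
# the pairwise overhead dominates. The exact floor depends entirely on
# the assumed overhead constant; only the shape of the curve is the point.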
> In order to reach this 1-3 week size, sometimes further > partitioning the task is your only option. My suggestion is to break the (sub-) domain into smaller pieces rather than making the lifecycle steps more fine-grained. I think the steps are fine-grained enough and any more divisions can get you into trouble. > >I also want to echo "... and have a well defined completion criteria". > >But I'd also add that > > a) those completion criteria are the basis for an inspection/ > > review/walkthrough (i.e., the criteria are the inspection > > checklist), and > > Hmmm... yes and no. If my completion criteria for the OIM is correct > syntax and accurate (relative to requirements) semantics, I don't quite > have a checklist against which to review, especially for the semantics. > Am I missing something here or am I putting too much content into the > "checklist"? You might be reading too much into "checklist". Along the lines of "verification as separate from validation" (aka "do it right vs. do the right thing"), the checklist is verification in nature ("do it right"). The OIM criteria should focus on what it means to be a well-formed OIM. How do I distinguish a proper one from an improper one? The issue of whether this particular OIM actually meets the customer's needs ("do the right thing") is certainly an issue to be addressed in the inspection/review/walkthrough, but it is based on comparing back to the upstream deliverables (e.g., a written requirements document). This part is not based on the inspection checklist. > I disagree on the models replacing the technical notes for a few reasons: > > * The model does not capture the thinking and trade-offs that went into > formulating the conceptual model that the OOA captures. Understanding this > is frequently important for maintaining and revising the models. > > * A good technical note will be profusely illustrated and rich with analogies. > there is no way to capture or communicate these with the models. I'm certainly not saying that technotes aren't useful. Surely your points are the prime motivations for technotes. My concern is that technotes are not _always_ useful, ESPECIALLY when they duplicate information that can be better expressed in the models. So arbitrarily forcing them to be a step in the lifecycle could do more harm than good. -- steve Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > As soon as you allow any implementation considerations to creep into > > the model then analysts become designers because they must worry about > > the execution efficiency of their models. It is essential that the > > analyst is able to separate the functional information from the > > performance information; even when both must be specified. > > But haven't you been contending for awhile now that implementation > considerations are sometimes unavoidable in the OOA? Or are you > simply arguing that performance related implementation considerations > need not appear in the OOA? I'm not sure that I have been arguing that; but it is true that implementation considerations do frequently invade model building due to assumed performance considerations. Saying that it's wrong doesn't change the facts. There are actually two concepts here that are frequently mixed: performance issues and implementation issues. Coloration (or tagging, or adornment, or whatever name you use) is used to add information to a model that is used in its reification. There are different ways that these can be used:

1. They can specify implementation information. e.g. "implement this object's instances in an array with 12 elements"

2. They can provide performance information. e.g. "response time to this incoming wormhole request to the first response wormhole must be less than 3ms"

3. They can provide information about the expected dynamics of the system. e.g. "in normal operation expect only 10 instances; under worst case conditions the maximum will be 12"
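A minimal sketch of the distinction, with the colorations held in an overlay rather than in the OOA model itself (Python; the element names, the Coloration record and the overlay structure are all illustrative assumptions, not anything the method defines):

# Hypothetical sketch: colorations live in an overlay keyed to model
# elements, so the OOA model itself stays free of them.
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    IMPLEMENTATION = auto()  # type 1: consumed directly by the architecture
    PERFORMANCE = auto()     # type 2: a system property the translator must satisfy
    DYNAMICS = auto()        # type 3: expected behaviour the translator may exploit

@dataclass
class Coloration:
    element: str  # which model element the coloration is attached to
    kind: Kind
    note: str

overlay = [
    Coloration("tracking.Track", Kind.IMPLEMENTATION,
               "implement instances in an array with 12 elements"),
    Coloration("tracking.wormhole.request", Kind.PERFORMANCE,
               "first response wormhole within 3ms"),
    Coloration("tracking.Track", Kind.DYNAMICS,
               "normally 10 instances; worst case 12"),
]

# Only type-1 colorations would populate the architecture directly;
# the other kinds must be processed by the translator.
direct = [c for c in overlay if c.kind is Kind.IMPLEMENTATION]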
Implementation information is related to a specific implementation and is therefore closely related to the chosen architecture. Performance (and dynamic behaviour) information is a property of the system itself and is independent of the architecture. I would always advise against the use of type-1 information in an application domain model (except during early development phases) unless it is properly justified. However, it can sometimes be a bit unclear just what type of information is being specified. If I say "all instances of the object must fit into 4kB memory" then it seems like performance information; but if the real motive is "store instances in this 4kB on-chip SRAM" then it is implementation information. I think the distinction is that implementation information is used directly to populate the architectural model during translation, whereas the other types of information must be processed by the translator and thus only indirectly populate the architecture. SM could directly support performance/dynamic information as overlays on domains in a system, whereas implementation information should be only indirectly supported on such overlays. (I don't believe that this information should be given within domains themselves because this would compromise reusability). Given that there are different types of information, is it right to use a single mechanism (coloration) to specify it? Dave. -- David Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > [difference: elaboration vs translation] > But if one chooses the > concurrent approach to translation, the paths are very different and this > constitutes a significant difference in the approaches. [Which I think was > agreeing with a point you made earlier.] There is definitely a more obvious difference when you use the more powerful forms of translation. However, does it really matter? Even if translation is nothing more than writing scripts that do elaboration; then this is an improvement on manual elaboration (Indeed, it is probably better than manual translation). The point of investigating translation is that, if you accept the premise that scripting is good (a Microsoft spokesperson recently stated that it is not) then merely automating the elaboration is not the most sensible design for such scripts. However, in a way, I don't care what the outcome of elaboration vs translation is for the general software industry. In the work I do, translation is a very effective tool which works for me. I don't use CASE tools (at the moment) so the standardisation issues aren't too important. I'll leave marketing and evangelism to those who do care. And a final comment: we have kept using the word "formalism" in this thread when discussing translation steps. I think that it is more meaningful to stick to the PT term: architectural model. The use of "formalism" implies a greater mathematical rigor than is frequently in evidence. Dave -- David Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect.
lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > Reference: "Data Types in OOA", Sally Shlaer and Steve Mellor, http://www.projtech.com > > excerpt: "The base data types defined for OOA are: enumerated, boolean, extended boolean, symbolic, numeric, ordinal, time, duration, arbitrary." True, but this doesn't help the problem of dealing with units (dimensions). Those are only described descriptively in the Range specification of an attribute's data domain. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Lee... > Yes, but I still think this is a good idea at times. Some larger OIM's can be reviewed > by clusters, and some complex OCM's by scenarios. Good point. I am biased by the fact that our applications generally have a large number of domains with modest object counts. BTW, in the future we will probably be reviewing piecemeal anyway. We have a lot of pressure to develop features in a release individually. This is because we have a lot of contracts into programs with fixed dates where it is desirable to drop features rather than miss a date. Since our schedules are _always_ very optimistic for reasons beyond our control, the need to drop features is common. But this tends to be inefficient if one designs and builds most of the infrastructure prior to knowing a feature will be axed. Therefore we are moving towards a sort of use case approach to building our applications so that we only build those portions that are necessary for a particular feature set. (Another reason why we have lots of domains.) These would be reviewed as each incremental feature is added. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > But haven't you been contending for awhile now that implementation > > considerations are sometimes unavoidable in the OOA? > > I'm not sure that I have been arguing that; but it is true that > implementation considerations do frequently invade model building > due to assumed performance considerations. Saying that it's > wrong doesn't change the facts. > > There are actually two concepts here that are frequently mixed: > performance issues and implementation issues. > > Coloration (or tagging, or adornment, or whatever name you use) > is used to add information to a model that is used in its > reification. There are different ways that these can be used: Aha, I finally see the disconnect after all these years! I don't regard coloration as part of the OOA models. I see it as Round 1 of the RD -- effectively an RD overlay superimposed on the OOA model. > Given that there are different types of information, is it > right to use a single mechanism (coloration) to specify it? Regardless of where the coloration goes, I think I would vote for it being described with a different formalism.
If for no other reason than to underscore that they are different information that may or may not move with an application if it is ported to another environment. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > However, does it really matter? Even if translation is nothing more than > writing scripts that do elaboration; then this is an improvement on > manual elaboration (Indeed, it is probably better than manual > translation). I think there is another issue: clearly defined scope for the elaboration. When you launch into your elaborative scripts, it is quite clear that you are done with the abstract solution model. When the elaborationists incrementally add detail to their models, this border becomes blurred. This is true even if separate lower level diagrams are used because referential integrity still needs to be traced and the higher level models still need place holders that may depend upon the implementation. > And a final comment: we have kept using the word "formalism" in this > thread when discussing translation steps. I think that it is more > meaningful to stick to the PT term: architectural model. The use > of "formalism" implies a greater mathematical rigor than is > frequently in evidence. I had interpreted "formalism" more narrowly -- to a particular description syntax. Thus I could see the architectural model being composed of several formalisms. Where I see the translation steps coming in is the process (translation) that operates on the architectural model. Whether these are sequential elaborations or the octopus approach, the formalisms might remain intact in the architectural model. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com "Dana Simonson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > Reference: "Data Types in OOA", Sally Shlaer and Steve Mellor, http://www.projtech.com > > excerpt: "The base data types defined for OOA are: enumerated, boolean, extended boolean, symbolic, numeric, ordinal, time, duration, arbitrary." >True, but this doesn't help the problem of dealing with units (dimensions). Those are only described descriptively in the Range specification of an attribute's data domain. From the introduction to that paper: "Rule: All data elements that appear in the OOA models of a domain must be typed. ...Domain Specific data types are defined by the analyst in order to capture ideas such as power, voltage, position... ... When an analyst defines a domain-specific data type, he does so by referring to an appropriate base data type..." All we have done is provide the analyst a way to specify his domain types in terms of OOA base types that puts the information in a machine readable format. No extensions were made to the method. The method does not specify that the description must be a single text string with no restrictions on format or consistency of content. Previously, we had used the idea of Volts being converted to milliVolts... Both Volts and milliVolts are domain types.
They must be specified in terms of OOA base types (let's use number with 3 places of decimal precision and number with no decimal precision respectively). The architecture does not know that these two numbers represent Volts and milliVolts nor should it ever care. If the implementation uses an integer processor, it may choose to store all numbers in a fixed point representation where all numbers are scaled to allow representation as integers. But, if it does so, it must do so such that domain-specific types are handled consistently. 1 Volt is 1000 milliVolts so I can have domain types of Volts and milliVolts, read one attribute, multiply by 1000 and save it to another attribute. This has to 'just work' independent of what the architecture does in its representation and manipulation of these values. <<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>> Dana Simonson Engineering Section Manager Transcrypt Operations - Waseca dsimonson@transcrypt.com www.transcrypt.com
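A minimal sketch of that "just work" requirement (Python; the DomainType class and the fixed-point storage decision are assumptions standing in for one possible architecture, not the method's mechanism):

# Hypothetical sketch: the analyst sees only domain types (Volts,
# milliVolts) defined over OOA base types; the architecture privately
# chooses a scaled-integer representation, but analyst-level arithmetic
# must be unaffected by that choice.
class DomainType:
    def __init__(self, name, decimal_places):
        self.name = name
        self.scale = 10 ** decimal_places  # fixed-point scaling chosen by the architecture

    def store(self, value):   # analysis value -> stored integer
        return round(value * self.scale)

    def load(self, stored):   # stored integer -> analysis value
        return stored / self.scale

volts = DomainType("Volts", decimal_places=3)            # number, 3 decimal places
millivolts = DomainType("milliVolts", decimal_places=0)  # number, no decimals

# Analyst-level action: read a Volts attribute, multiply by 1000,
# write a milliVolts attribute.
v = volts.load(volts.store(1.0))    # 1 Volt, however it is represented
mv = millivolts.store(v * 1000)
assert millivolts.load(mv) == 1000  # 1 Volt == 1000 milliVolts, regardless of storage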
SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Whipp wrote: >But is it [coloration expressing performance requirements] >part of RD? I would argue not. It is really an extention >of the domain chart (bridge specification?), in that it is information >that is necessary to specifiy the system. If I specify that a cash >machine should supply money within 30 seconds of the user requesting >it: then this is not part of any domain and nor is it part of the >design process. >An as I pointed out, information that specifies performance or >dynamic-behaviour for the application doimain will be invarient >across multiple environemnt. I would argue that timing information *is* a part of the domain, whether or not SMOOA has a notation for it. To me, relegating it to a bridge or to coloration relegates important system requirements to a document normally reserved for detailed design. I like to put important timing information (important to the customer, that is) in the domain mission statement where everyone can find it- - even the bank president. This way, it's at the same level of abstraction as the domain's function and data requirements. -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Simonson... > All we have done is provide the analyst a way to specify his domain types in terms of OOA base types that puts the information in a machine readable format. No extensions were made to the method. The method does not specify that the description must be a single text string with no restrictions on format or consistency of content. Note that the paper is an RD paper and it specifically states that, "Exactly how you go about defining a domain-specific data type depends on the particulars of your automation tools." My argument is that your extensions, albeit fairly trivial, are supplying a formalism for interpreting the natural language data domain description provided by the method in the OOA and they are doing this for the benefit of the translator. In doing so you are supplementing information in the OOA models -- in effect doing coloration. [In fact, you do not seem to be doing this in the OOA at all; the "machine readable format" sounds like you are preparing information that is fed directly to the translator rather than through the attribute's data domain description(?)] Relevant to how we got on this topic, I contend that those colorations _may_ allow the translator to do things, such as units conversions, that would otherwise have been manually placed in realized code. Whether this is actually the case depends on semantics, syntax, the particular application, and what the translator actually does. To make a stronger statement, I think anything one does during RD (e.g., coloration, custom script, etc.) that is specific to an application may result in realized code that was not defined in the OOA. BTW, note that the fact that the paper is an RD paper supports my view that things like units conversions should be handled by the translation rules rather than being placed in the in the OOA as explicit processes. :-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. 
L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I agree with you that colorations should not be part of the OOA model. Oh, drat. Now I am back to wondering where you see all that implementation necessarily creeping into the OOA. > I am not sure that I agree that this is currently the case. However, > perhaps we should read my "used to add" phrase as "information that > augments the OOA model." > > But is it part of RD? I would argue not. It is really an extension > of the domain chart (bridge specification?), in that it is information > that is necessary to specify the system. If I specify that a cash > machine should supply money within 30 seconds of the user requesting > it: then this is not part of any domain and nor is it part of the > design process. > > The design process (RD) must allow this information to be > propagated onto domains prior to (or as part of) their > translation. It is the propagation that is "round 1 of RD", > not the specification. And the propagation may convert > performance/dynamic-behaviour coloration into implementation-detail > coloration. I agree. I was thinking of Round 1 more along the lines of expressing the specification (e.g., marking up a hardcopy of the OOA with a highlighter) in some rigorous manner that would be useful to propagation (Round 2). > > Regardless of where the coloration goes, I think I would vote for > > it being described with a different formalism. If for no other > > reason than to underscore that they are different information > > that may or may not move with an application if it is ported > > to another environment. > There was a misspokement -- my quote should have read, "...the descriptions go, I think I would vote for them being described with different formalisms." Why can't these fancy-shmancy editors send what I meant rather than what I type? They seem to have plenty of time to criticize my spelling and grammar. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > > I agree with you that colorations should not be part of the OOA model. > > Oh, drat. Now I am back to wondering where you see all that implementation > necessarily creeping into the OOA. A correct modelling position is: "Completely ignore all the performance implications of the model: just do the best possible analysis and the architecture/translator will meet all the performance requirements." Unfortunately, this is not a very politically viable position to hold. There are generally people around who think like programmers, not analysts; and they will look at performance problems and say: "pollute the model: the translator/architecture passes its regression tests at the moment and we don't want to change it." The fact that you don't want to change the model, for the same reason, may not help. This is especially true if the model change is simpler than the architecture/translator fix and the project is running late.
However, even when the translator/architecture fix is simpler than the model fix: there are some people who view the translator as a fixed (potentially reusable, multi-project) tool that shouldn't be modified just to keep the analysts happy. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Lynch, Chris D. SDX wrote: > I would argue that timing information *is* a part of the > domain, whether or not SMOOA has a notation for it. To me, > relegating it to a bridge or to coloration relegates > important system requirements to a document normally > reserved for detailed design. I would like to question the concept of "relegating it to a bridge". Bridges are no less important than domains. The domain chart is vastly under-rated (and under-powered). > I like to put important timing information (important to the > customer, that is) in the domain mission statement where > everyone can find it -- even the bank president. This way, > it's at the same level of abstraction as the domain's > function and data requirements. I will admit that the arguments are finely balanced. However, consider again my previous example: "The cash machine should supply the money within 30 seconds of the user's request". Now consider a simple domain chart: it has the application domain, the user-interface domain and the cash-output controller. On which domain do we put the timing requirement? You might say: "obviously it's the application domain". This may seem very logical. However, I would argue that the basic timing restriction is between the time when the user hits the "GO" button (whatever that might be) and when the cash is available. I can state this without knowing anything about the domains. Furthermore, I want to be able to maintain it even through domain-replacement. These events will be visible as they travel across a bridge, somewhere. In hardware terms, we might use what is known as a pin-to-pin timing arc to describe this. For example, on a 2-input AND gate there may be two timing arcs: one for each input. The timing represents the time from the signal entering an input to the time its consequence is visible at the output. (In fact, there are 4 timing arcs, because the falling-edge and rising-edge timings are usually different). The important point here is that the timing model views the component as a black-box. We can define timings without knowing anything about the innards of the AND gate.
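A minimal sketch of such a pin-to-pin arc as a black-box constraint (Python; the pin names and the TimingArc record are illustrative assumptions, not notation from the method or from any synthesis tool):

# Hypothetical sketch: a pin-to-pin timing arc bounds the delay from a
# stimulus at one pin to its visible consequence at another, saying
# nothing about the innards of the component it crosses.
from dataclasses import dataclass

@dataclass
class TimingArc:
    from_pin: str      # where the stimulus enters (e.g. a client wormhole)
    to_pin: str        # where the consequence appears (e.g. a server wormhole)
    edge: str          # "rise" or "fall" -- hardware arcs differ per edge
    max_delay_s: float

    def satisfied_by(self, observed_delay_s):
        return observed_delay_s <= self.max_delay_s

# The cash-machine requirement, stated between wormholes rather than
# inside any one domain:
arc = TimingArc("ui.go_button_pressed", "cash_output.money_available",
                edge="rise", max_delay_s=30.0)
assert arc.satisfied_by(12.5)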
Going back to the cash-machine, I noted that the events to which the timing relates are all represented by wormholes. I.e. they are the things that are visible when the domains are viewed as black boxes. Interdomain events are visible in 3 places: the client wormhole, the server wormhole and the bridge. Does it make sense to think about timings within a domain; or should we look at the bridge? If you consider OOA96, it has only one concept of time within a domain: the delayed event. I never use these. If a delay is needed then it is _always_ a consequence of something outside the domain. Therefore I always model the delay via a wormhole. This is like the pre-OOA96 days when an (external) timer object could be visible within a domain; except that I keep the timer external. This view is reinforced by the fact that event delivery in OOA is considered to be completely non-deterministic. As long as the event-sequence rules are met, any timing is allowed. In fact, ADFD execution time is similarly unconstrained. So timings attached to events may be completely swamped by other delays introduced by other domains (possibly the architecture). Only a system viewpoint makes sense when talking about timings. So I would like to see the system viewpoint enhanced. The domain chart (and bridges) should not be viewed as subsidiary to the domain models. A strong system model, including bridge definitions, should include timing requirements (and assumable facts) between wormholes that must be met (or can be assumed) during RD. Finally, if the timing is specified outside of the domain, then when you build a special, cheaper, version of your machine (which is slower) then you can reuse the application domain and just change the timing in the system specification. (For more information on translation using constraints on the external interface, you might want to look at hardware synthesis tools, such as Synopsys, which translate RTL circuit descriptions (a well defined abstraction) onto gate level netlists (a well defined architecture). There are plenty of books around.) Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. "Michael M. Lee" writes to shlaer-mellor-users: -------------------------------------------------------------------- At 12:29 PM 8/26/98 -0500, you wrote: >"Stephen R. Tockey" writes to shlaer-mellor-users: >-------------------------------------------------------------------- > > >"Michael M. Lee" wrote > >> Yes, I agree entirely on the task size, but there is a limit to the amount of >> time by which you can reduce a task duration by increasing the number of people >> working on that task. > >According to my calculations, the maximum schedule compression is to >approximately 37% of the having-one-person-do-it time. I wrote up a paper >a while back on the effect of team size on productivity and project cost. >I'm looking at publishing the paper, but it won't be out for a while. That >paper gives the justification for the 37% limit. Most interesting. Please let us know when it's published or if you're willing to release a preliminary copy. > >Anyway, I agree that there's a limit. > >> In order to reach this 1-3 week size, sometimes further >> partitioning the task is your only option. > >My suggestion is to break the (sub-) domain into smaller pieces rather than >making the lifecycle steps more fine-grained. I think the steps are fine- >grained enough and any more divisions can get you into trouble. > >> >I also want to echo "... and have a well defined completion criteria". >> >But I'd also add that >> > a) those completion criteria are the basis for an inspection/ >> > review/walkthrough (i.e., the criteria are the inspection >> > checklist), and >> >> Hmmm... yes and no. If my completion criteria for the OIM is correct >> syntax and accurate (relative to requirements) semantics, I don't quite >> have a checklist against which to review, especially for the semantics. >> Am I missing something here or am I putting too much content into the >> "checklist"? > >You might be reading too much into "checklist". Along the lines of >"verification as separate from validation" (aka "do it right vs. do the >right thing"), the checklist is verification in nature ("do it right"). >The OIM criteria should focus on what it means to be a well-formed OIM.
>How do I distinguish a proper one from an improper one? > >The issue of whether this particular OIM actually meets the customer's >needs ("do the right thing") is certainly an issue to be addressed in >the inspection/review/walkthrough, but it is based on comparing back to >the upstream deliverables (e.g., a written requirements document). This >part is not based on the inspection checklist. OK, that all makes sense. Thanks for your explanation. > >> I disagree on the models replacing the technical notes for a few reasons: >> >> * The model does not capture the thinking and trade-offs that went into >> formulating the conceptual model that the OOA captures. Understanding this >> is frequently important for maintaining and revising the models. >> >> * A good technical note will be profusely illustrated and rich with analogies. >> there is no way to capture or communicate these with the models. > >I'm certainly not saying that technotes aren't useful. Surely your points >are the prime motivations for technotes. My concern is that technotes are >not _always_ useful, ESPECIALLY when they duplicate information that can be >better expressed in the models. So arbitrarily forcing them to be a step >in the lifecycle could do more harm than good. > >-- steve > 'archive.9809' -- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > > I agree with you that colorations should not be part of the OOA model. > > > > Oh, drat. Now I am back to wondering where you see all that implementation > > necessarily creeping into the OOA. > > A correct modelling position is: "Completely ignore all the performance > implications of the model: just do the best possible analysis and the > architecture/translator will meet all the performance requirements." > > Unfortunately, this is not a very politically viable position to > hold. There are generally people around who think like programmers, > not analysts; and they will look at performance problems and > say: "pollute the model: the translator/architecture passes its > regression tests at the moment and we don't want to change it." > > The fact that you don't want to change the model, for the same > reason, may not help. This is especially true if the model change > is simpler than the architecture/translator fix and the project > is running late. However, even when the translator/architecture > fix is simpler than the model fix: there are some people who view > the translator as a fixed (potentially reusable, multi-project) > tool that shouldn't be modified just to keep the analysts happy. While I agree with all this, it doesn't resolve my quandary. I *thought* that in the past you have asserted that some implementation information in the OOA is unavoidable (as opposed to inevitable). A couple of messages ago you gave examples involving coloration, so I assumed we simply differed on whether coloration is part of the OOA model. But then you said you agreed that it wasn't. So that put me back wondering what sort of implementation information is unavoidably in the OOA that isn't coloration. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Hello, I would like some input on doing software engineering with ooa/rd.
Specifically where does each aspect of S-M,

o System Level
  - Domain Chart
  - Project Matrix
o Domain Level
  - Subsystem Relationship Model
  - Subsystem Communication Model
  - Subsystem Access Model
o Subsystem Level
  - Information Models
  - Object Communication Models
  - Object Access Models
o Object Level
  - State Transition Diagrams
  - Action Data Flow Diagrams
o Design Phase

fit into the SE process? Since S-M is concerned with OOA, I would guess that all of the above would end up in what is traditionally known as a Software Requirements Specification (SRS)? What if I wanted to break it up into System Level Requirements and Detailed Requirements? Where would the line be drawn? In S-M OOA/RD where does analysis end and design begin? I'm familiar with IEEE standards and DoD standards with regard to Software Requirements Specifications. How does S-M OOA/RD fit in with these guidelines? If it doesn't, what would be a "good" outline for constructing such a document? Kind Regards, Allen Theobald tristan.pye@aeroint.com writes to shlaer-mellor-users: -------------------------------------------------------------------- I know I said 'be gentle', but no replies at all...? Thanks in advance (I hope!) Tristan. (again) ---------- From: Tristan Pye[SMTP:tristan.pye@aeroint.com] Sent: 16 July 1998 17:10 To: 'Shlaer-Mellor UG' Subject: An information modelling question... Hello all, I'm having trouble modelling the following type of construct.

A<----->>B
^        ^
^        ^
|        |
|        |
v        v
C<----->>D

That all looks simple enough, but unfortunately B must be related to the same C via both A and D. A and D have no common identifiers, so there are no collapsed referentials, which scuppers any attempt to enforce it that way. Is it possible to enforce this on the OIM, or does B have to be intelligent enough in its action language to stop it referencing two different Cs? Any help would be appreciated (but please be gentle - I'm new to the modelling game!) Thanks, Tristan. -------------------------------- Tristan Pye Aerosystems International www.aeroint.com +44 (0)1935 443103 tristan.pye@aeroint.com Thursday 16 July 1998, 4:55 pm -------------------------------- peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 10:47 AM 9/2/98 +0100, shlaer-mellor-users@projtech.com wrote: >tristan.pye@aeroint.com writes to shlaer-mellor-users: >-------------------------------------------------------------------- >I'm having trouble modelling the following type of construct. > I'll number the relationships:

        R4
  A<----->>B
  ^        ^
  ^        ^
R2|        |R3
  |        |
  v        v
  C<----->>D
        R1

>That all looks simple enough, but unfortunately B must be related to >the same C via both A and D. A and D have no common identifiers, so >there are no collapsed referentials, which scuppers any attempt to >enforce it that way. > >Is it possible to enforce this on the OIM, or does B have to be >intelligent enough in its action language to stop it referencing two different Cs? If you refer to the "OOA96 Report" section 3.3 (p. 16) you'll see a description of the Composed Relationship. Perhaps R1 or R2 could be composed: R1=R2+R3+R4 or R2=R1+R3+R4 Can this work for you? _______________________________________________________ Pathfinder Solutions Inc.
www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| tristan.pye@aeroint.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Thanks for forwarding the replies, > What are you talking about? There were several replies. > Pay attention! How come none of these ever found their way to me - (I studiously keep everything that appears - I got loads of stuff on SM to C++ and SM Engineers around the same time that I originally posted the question - I'm sure I'd have noticed someone answering mine!)? I can only see the group via e-mail - is there any way messages can get put on the usergroup without being emailed around? >> Neil Lang wrote: >> the OIM fragment above models >> a loop of dependent relationships, and needs to be formalized to >> capture the dependency of the relationships in the loop. In the >> OOA96 report Sally and I described three ways to do so:

>> 1. constrained referentials
>> 2. collapsed (or multiple) referentials
>> 3. composed relationship

How do I get hold of a copy of the OOA96 report? What does 'composed relationship' mean? I can't find it in any of our books here... (told you I was new...) I think I can make a guess at a 'constrained referential'. I'm still in the process of digesting HS Lahman's (Do you have a first name?) epistle on the subject! Thanks guys! Tristan P.S. Sorry to bring the tone down... recent discussions have been very interesting, but mostly way over my head - I start thinking of Star Trek every time wormholes are mentioned! lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > I would like some input on doing software engineering with ooa/rd. > Specifically where does each aspect of S-M, Unfortunately my views in this particular area are probably not exactly mainstream S-M, so regard them with suitable prudence. For starters, I don't think any of these things are related to gathering requirements. I regard them all as various phases of software design. In our shop we do not start OOA until a detailed Functional Specification (a description of the software as a black box from the user's view) is available (two exceptions below). In our development process requirements are gathered as a prerequisite to project planning and, hence, to the Functional Specification. Typically the Domain Chart is done as part of the Functional Specification because it is an easy way to identify major system components that the end user might also identify. We also do the Information Model in parallel to the Functional Specification because object counts are the primary basis for estimating our OO projects and we have to commit to a schedule when the Functional Spec is delivered. [The apparent catch-22 is not real because the Functional Spec also contains non-software information and that is worked on after the functionality has been described as the Information Models are being worked up.]

> o System Level
>   - Domain Chart
>   - Project Matrix
> o Domain Level
>   - Subsystem Relationship Model
>   - Subsystem Communication Model
>   - Subsystem Access Model

We really don't use subsystems. Our domains tend to be smaller than most people's (10-40 objects) and until recently our CASE tool didn't support them except superficially.
> o Subsystem Level
>   - Information Models
>   - Object Communication Models
>   - Object Access Models
> o Object Level
>   - State Transition Diagrams
>   - Action Data Flow Diagrams

These are all done, except IM, during what we regard as the Software Design phase. We also do model simulation, develop the architecture design, formalize bridges, unit tests for realized code, and whatnot in this development phase. > o Design Phase I assume you mean by this Recursive Design or translation. We regard this as a separate Design Implementation phase. All of the translation is done here. The integration testing is done here for the whole system but without hardware. A separate phase, Design Verification, is done as an integration test with hardware. When this is complete the system is released to Product Test. > fit into the SE process? Since S-M is concerned with OOA, I would > guess that all of the above would end up in what is traditionally known > as a Software Requirements Specification (SRS)? As I indicated above, I don't think any of this has anything to do with gathering requirements. Our Statement of Requirements (SOR -- different strokes for different folks) is completed long before we even think about doing the OOA. > In S-M OOA/RD where does analysis end and design begin? In the S-M terminology, OOA is analysis and RD is design. In what I believe (always touchy since the words are so overloaded) is the mainstream SE view, it is all design. By the time you start an OOA you should have a pretty good idea what it is you need to build. Given our approach where we start OOA from a detailed black box description, one could argue that the OOA is an analysis of the way the problem is expressed in software. But I think that is a stretch; we are defining the software solution, which I regard as design. > I'm familiar with IEEE standards and DoD standards with regard to > Software Requirements Specifications. How does S-M OOA/RD fit in with > these guidelines? I think that requirements tracking is as much of a problem for S-M as it is for any other OO bubbles & arrows based specification. The idea of requirements tracking dates from an earlier era based upon functional decomposition. We cop out on this entirely by simply specifying what test in the Product Test suite will test each requirement. (This is done as part of project planning and the Product Test group goes off and develops those tests in parallel from the SOR and the Functional Specification.) Since we are strictly a COTS vendor to DoD we don't have to worry too much about their standards. So I'll leave it to other S-M users who are primes to provide more reliable information. We have also built our development process incrementally over a long period of time and in doing so we haven't paid much attention to IEEE standards. Because software is only a component of the system, the division has its own development process that we have to map into as well. So I can't provide much insight here either. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Pye... > I know I said 'be gentle', but no replies at all...? I never got the message. Did you get the echo from SMUG on your original post? > I'm having trouble modelling the following type of construct.
>
> A<----->>B
> ^        ^
> ^        ^
> |        |
> |        |
> v        v
> C<----->>D
>
> That all looks simple enough, but unfortunately B must be related to > the same C via both A and D. A and D have no common identifiers, so > there are no collapsed referentials, which scuppers any attempt to > enforce it that way. > > Is it possible to enforce this on the OIM, or does B have to be > intelligent enough in its action language to stop it referencing two different Cs? I would suggest picking up the OOA96 paper from the PT web -- it devotes a whole section to this issue. One solution is to "compose" the relationship. For instance, you could indicate that the B/D relationship is composed from the B/A + A/C + C/D relationship. If the relationships are identified R1, R2, R3, and R4 going from A->C->D->B->A, the B/D relationship would then be labeled R3 = R1 + R2 + R4. Another solution is to identify the dependency of the loop explicitly with the referential attribute. In the object with the most instances, tag the referential attribute with a "c" to indicate that it is dependent on the other loop relationship for that object. I happen to _really like_ collapsed referentials: at the expense of tedious compound identifiers, they eliminate the need to resolve loop issues entirely. Therefore my advice would be to revisit the identification scheme to see if they might be used. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com
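A minimal sketch of the constraint the composed relationship expresses (Python; the instance populations are hypothetical, and the dicts stand in for navigating each relationship in one direction):

# Hypothetical sketch: with R1..R4 labeled A->C->D->B->A as above,
# R3 = R1 + R2 + R4 says that, from any B, the C reached via A must be
# the same C reached via D.
r4 = {"b1": "a1"}   # navigate R4: B -> A
r1 = {"a1": "c1"}   # navigate R1: A -> C
r3 = {"b1": "d1"}   # navigate R3: B -> D
r2 = {"d1": "c1"}   # navigate R2: D -> C

def loop_consistent(b):
    via_a = r1[r4[b]]   # B -> A -> C
    via_d = r2[r3[b]]   # B -> D -> C
    return via_a == via_d

# The OIM notation asserts this; an architecture (or the action language
# writing the referentials) is what actually has to enforce it.
assert loop_consistent("b1")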
Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- tristan.pye@aeroint.com wrote: > How do I get hold of a copy of the OOA96 report? You can download the OOA96 report in PDF format. Go to http://www.projtech.com and look in the lower right frame near the bottom. There is a link that will allow you to download it. > > What does 'composed relationship' mean? I can't find it in any of our books here... (told you I was new...) I think I can make a guess at a 'constrained referential'. > > I'm still in the process of digesting HS Lahman's (Do you have a first name?) epistle on the subject! Look on page 26-28 of Object Lifecycles: Modeling the World in States by Sally Shlaer and Steve Mellor. -- ________________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges | | Carolyn Duby voice: +01 508-384-1392 | carolynd@pathfindersol.com fax: +01 508-384-7906 | ________________________________________________________| "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- > In S-M OOA/RD where does analysis end and design begin? In the S-M terminology, OOA is analysis and RD is design. In what I believe (always touchy since the words are so overloaded) is the mainstream SE view, it is all design. By the time you start an OOA you should have a pretty good idea what it is you need to build. Given our approach where we start OOA from a detailed black box description, one could argue that the OOA is an analysis of the way the problem is expressed in software. But I think that is a stretch; we are defining the software solution, which I regard as design. I would ask "Why do you care?". It's impossible to completely separate design from analysis. As soon as you write that first requirement, or create a section heading or discover your first object, you have effectively made a design decision. What's more important is to define review points, whereby a development team can say 'at this point we stop and review what's been done to date'. IMHO of course, Leslie Munday. Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > [...] > While I agree with all this, it doesn't resolve my quandary. > > I *thought* that in the past you have asserted that some implementation > information in the OOA is unavoidable (as opposed to inevitable). A couple of > messages ago you gave examples involving coloration, so I assumed we simply > differed on whether coloration is part of the OOA model. But then you said you > agreed that it wasn't. So that put me back wondering what sort of implementation > information is unavoidably in the OOA that isn't coloration. I have probably been confusing you by saying "unavoidable" instead of "inevitable". I do happen to believe in the concept of an implementation-free model. The forces against it seem to be political and historical. However, in the past, *you* have asserted that OOA is not implementation-free. You have pointed to depth-first vs breadth-first iterations to support your argument. I have argued against this. Do you believe (agree) that, in an "ideal" environment, the structure of a pollution-free OOA model does not influence the performance of the implementation. (I use "ideal" in the strict mathematical sense). Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 10:58 AM 9/2/98 -0700, Leslie Munday wrote: > >> In S-M OOA/RD where does analysis end and design begin? > >In the S-M terminology, OOA is analysis and RD is design. In what I >believe (always touchy since the words are so overloaded) is the >mainstream SE view, it is all design. By the time you start an OOA you >should have a pretty good idea what it is you need to build. Given our >approach where we start OOA from a detailed black box description, one >could argue that the OOA is an analysis of the way the problem is >expressed in software. But I think that is a stretch; we are defining the >software solution, which I regard as design. I'll bet you can get 10 different definitions for "Analysis" and "Design" from 5 different people - and here's one of mine: Analysis - specification of a problem solution in terms of the problem. Design - a strategy for, and elements supporting, the realization and implementation (deliverable executable thingies) of a problem solution. In OOA/RD, this typically is a mapping of Analysis to Implementation. Of course, I must highlight an important fact: one of the most powerful aspects of OOA/RD is that Analysis and Design are completely and fundamentally distinct. _______________________________________________________ Pathfinder Solutions Inc.
www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| "Paul Higham" writes to shlaer-mellor-users: -------------------------------------------------------------------- According to the theory, analysis never stops and design starts, by definition, when you start analyzing the Software Architecture using OOA. Thus design is a special case of analysis. If we imagine the Domain Chart as a recursively-defined data structure then the process of system analysis starts with the application domain then recursively analyses each of its server domains until the recursion is terminated at the Software Architecture. Thus design terminates the analysis. So why then is it not called "recursive analysis"?!? Incidentally, this elegant picture's preservation is the reason that I would like to see the Domain Chart be constrained to be a directed acyclic graph with a unique application domain, but that's another thread . . . I believe the important thing to keep in mind is clean domain separation. If you are analysing a set of objects with a consistent level of abstraction then you are thinking only of the things that you should be thinking about. At some point you are going to have to think about implementation, in which case the objects you will be thinking about will be processes and communication between them, movement and storage of attribute data, and other implementation policies. At this point you will be doing design, but as long as it's clear what you are doing and what you are doing it with, it hardly matters what you call it - "A rose by any other name would smell as sweet." (unless of course it's Rational :) <> paul <> In message "Re: (SMU) OOA/RD and Software Engineering", lmunday@gmswireless.com writes: > >> In S-M OOA/RD where does analysis end and design begin? > >In the S-M terminology, OOA is analysis and RD is design. In what I >believe (always touchy since the words are so overloaded) is the >mainstream SE view, it is all design. By the time you start an OOA you >should have a pretty good idea what it is you need to build. Given our >approach where we start OOA from a detailed black box description, one >could argue that the OOA is an analysis of the way the problem is >expressed in software. But I think that is a stretch; we are defining the >software solution, which I regard as design. > >I would ask "Why do you care?". > >It's impossible to completely separate design from analysis. As soon as you write that first requirement, or create a section heading or discover your first object, you have effectively made a design decision. > >What's more important is to define review points, whereby a development team can say 'at this point we stop and review what's been done to date'. > >IMHO of course, > >Leslie Munday.
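A minimal sketch of Higham's picture (Python; the domain names and the chart itself are invented for illustration):

# Hypothetical sketch: the domain chart as a directed acyclic graph with
# a unique application domain; "recursive analysis" walks client->server
# until the recursion terminates at the Software Architecture.
domain_chart = {
    "Application":           ["User Interface", "Cash Output Control"],
    "User Interface":        ["Software Architecture"],
    "Cash Output Control":   ["Software Architecture"],
    "Software Architecture": [],  # no servers: this is where "design" begins
}

def analyse(domain, path=()):
    if domain in path:  # a cycle would make the recursion ill-founded,
        raise ValueError("domain chart is not acyclic at " + domain)
    print("  " * len(path) + "analysing " + domain)
    for server in domain_chart[domain]:
        analyse(server, path + (domain,))

analyse("Application")  # shared servers are simply visited once per client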
> I'm still in the process of digesting HS Lahman's (Do you have a first name?) epistle on the subject! I have a first name, but despise it. I go by the initials, which most of my friends contract to "H". That post was a mere footnote compared to others. > P.S. Sorry to bring the tone down... recent discussions have been very interesting, but mostly way over my head - I start thinking of Star Trek every time wormholes are mentioned! Yes, clearly several of us have far too much time on our hands and we start counting angels at the drop of a pin. (Funny you should mention... I have been using the wormhole as an inside joke in a thread on OTUG.) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I have probably been confusing you by saying "unavoidable" instead > of "inevitable". I do happen to believe in the concept of an > implementation-free model. The forces against it seem to be > political and historical. Ah, good. My karma is adjusted again. > However, in the past, *you* have asserted that OOA is not > implementation-free. You have pointed to depth-first vs > breadth-first iterations to support your argument. I have > argued against this. Do you believe (agree) that, in an > "ideal" environment, the structure of a pollution-free OOA > model does not influence the performance of the > implementation? (I use "ideal" in the strict mathematical > sense). Wow, how do you keep track of all this stuff? I can't even remember that particular argument! The only problem I recall having with breadth-first loops is that until OOA96 that was the only way to iterate and there are situations (e.g., changing the order of ordered sets within an iteration) where you _must_ do depth-first iteration. But I would regard that as a notation issue, not an implementation issue. (At least now I do.) I do recall that in our first S-M pilot project we happened to have a nested loop situation spanning several objects where the order of nesting depended upon the hardware implementation. At the time the hardware guys did not know which would be faster so we would have had to change the order that events were issued in the model if we guessed wrong. However, we eventually figured out (after the system had been built, of course) a more generic way to control the nesting that merely depended upon a specification object's values. To answer the question, I still have a niggling fear that there is a situation where performance considerations would necessarily have to appear in an OOA. But I currently have no plausible examples to demonstrate such a situation. That's why I was bugging you about it in this thread; I was looking for examples. The basis of my reservation is that it seems to me a performance problem could be on a grander scale than individual actions so that to resolve it one would have to modify the interactions between two or more state machines. The nested loop problem above is an example of the sort of problem I think might happen, though that particular case is resolvable in general. -- H. S.
Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Leslie Munday... > It's impossible to completely separate design from analysis. As soon as you > write that first requirement, or create a section heading or discover > your first object, you have effectively made a design decision. Eeek! That word again... DESIGN. It's just the way you use the word (above) that rankles. For a good few years now, we've been imploring people to keep the Design out of the Analysis, which *seems* to contradict your statement. To reduce the confusion I think we need a new word to describe what you mean. Let's call it - Scheme (for want of a better word). Scheme - The approach to be used in the Application Domain. Or choosing between two or more equally valid analysis models. Design - The approach to be used in the Software Architecture Domain. Or choosing between two or more equally valid analysis models. Rewriting your quote: It's impossible to separate the scheme from the analysis. As soon as you write that first requirement, or create a section heading or discover your first object, you have effectively made a scheme decision. Mike -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Thanks for all your input on Object-Oriented Software Engineering. I appreciate everything that was said to date, and it has given me additional insight into OOSE. From what I gather S-M OOA would consist of system-level detailed requirements, a rough IM (but only for dividing the system into domains), domain analysis (but not necessarily modeling), bridge definition, and finally planning/scheduling. And S-M Design would consist of the IM for each domain, the object communication model, the state model, the process model, and finally archetype-based decisions (assuming you are using a tool to translate, but what if you're not?) I'm not asking to start a philosophical discussion :^), but as a practical matter. If someone asks for: 1. A System Design Document, and 2. A Software Design Document, what goes into them, and what would be a good layout? If it doesn't violate corporate policies, can someone share their outlines? Again, thanks for all the info! Keep it coming! Kind Regards, Allen Theobald "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald >Since S-M is concerned with OOA, I would >guess that all of the above [OOA work products] >would end up in what is traditionally known >as a Software Requirements Specification (SRS)? Part of the problem is that OOA can be upstream or downstream of the SRS, depending on how it is used in the project. OOA is a system modeling method which can also be used to represent just the software part of a system, so it can be used either (a) as a tool for system requirements brainstorming with the domain experts or (b) to give precise expression to a previously written software spec. My experience is with (b). My preference is to do OOA immediately downstream of the SRS and feed back problems/questions from OOA into the SRS-writing process.
>I'm familiar with IEEE standards and DoD standards with regard to >Software Requirements Specifications. How does S-M OOA/RD fit in with >these guidelines? Not very well. While a PT paper on DoD 2167A describes the theoretical possibility of generating an SRS from OOA models by a mechanical procedure, I don't know of anyone who has done this. (It looks pretty daunting.) IMHO the output would be largely unusable; it would likely contain event/response pairs which are not required by the user, and if insufficient thread-of-control information were supplied to the generator, the output would be incomplete. For organizations which require a formal SRS (and verification of everything in it), my advice is to proceed with this document with all haste. It may seem like double work, but your modeling will go much more smoothly once you have a reasonable version of the SRS. >If it doesn't what would be a "good" outline for constructing such a >document? Transition to OOA can be eased by doing some parts of the SRS in an object-oriented way. For example, in a banking system, an SRS section on the "Customer object" (with data, functions, and behavior) has two good points: your client will understand it and your analysts have half their work done for them when they model the customer. Other parts of the system might be best organized by general function, e.g. (again for a bank) "nightly account posting" so as to be crystal clear in communicating with the client. Since in this example the clients are lawyers and accountants, it pays great dividends to make it easy for them! :-) Professional judgment is very important in structuring the SRS, but it's better to start with a bad one than to fret forever about which one is best. Hope this helps, -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 08:38 AM 9/3/98 -0400, shlaer-mellor-users@projtech.com wrote: >lahman writes to shlaer-mellor-users: >-------------------------------------------------------------------- >Yes, clearly several of us have far too much time on our hands and we start counting angels at the drop of a pin. That's "on the head of a pin". No wonder you keep coming up with the wrong count. ;-) _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- [snip] >I'm familiar with IEEE standards and DoD standards with regard to >Software Requirements Specifications. How does S-M OOA/RD fit in with >these guidelines? Not very well. While a PT paper on DoD 2167A describes the theoretical possibility of generating an SRS from OOA models by a mechanical procedure, I don't know of anyone who has done this. (It looks pretty daunting.)
IMHO the output would be largely unusable; it would likely contain event/response pairs which are not required by the user, and if insufficient thread-of-control information were supplied to the generator, the output would be incomplete. For organizations which require a formal SRS (and verification of everything in it), my advice is to proceed with this document with all haste. It may seem like double work, but your modeling will go much more smoothly once you have a reasonable version of the SRS. >If it doesn't what would be a "good" outline for constructing such a >document? Having performed OOA in a 2167A environment, here's my take: 1) My preference is not to separate OOA and OOD when using S-M. Just write one document, starting with domain models, object diagrams, state diagrams and data flow diagrams. Document each in turn and call it a Software 'Structure' Document. (Been asked not to use that word, Ssshh! design.) 2) If you must produce an SRS and an SDD, like I had to on my last project, then document the S-M model as above and document it in an SRS. Describe each of the data flow operations, in terms of the data flow in, data flow out, the state, the object and the domain it is in. Put a 'shall' in the description and label it as a requirement. The text is then extracted from the diagrams and the requirements randomly shuffled and replaced back into the document to give an arbitrarily ordered, complete set of functional requirements. The diagrams can be placed in an appendix, so that you can counter those reviewers that insist that you're putting design (oops, had to say it) in an SRS, by saying "the diagrams do not form part of the SRS, but are placed here to aid the reader's understanding ...". The diagrams can then be made into the content of the SDD with the same descriptions, but now ordered to correspond to the diagrams, and with the word 'shall' removed. My simplified view of DOD software development. As anyone working on a DOD contract will know, the whole process generally takes 5-7 years, independent of the size of the project. Les. [snip] Hope this helps, -Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- "Paul Higham" writes to shlaer-mellor-users: -------------------------------------------------------------------- I think you pulled an angel out of a hat with this one, Peter! <> paul <> In message "Re: (SMU) FW: An information modelling question...", shlaer-mellor-users@projtech.com writes: >peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: >-------------------------------------------------------------------- > >At 08:38 AM 9/3/98 -0400, shlaer-mellor-users@projtech.com wrote: >>lahman writes to shlaer-mellor-users: >>-------------------------------------------------------------------- > >>Yes, clearly several of us have far too much time on our hands and we start >counting angels at the drop of a pin. > >That's "on the head of a pin". No wonder you keep coming up with the wrong >count. ;-) >_______________________________________________________ > Pathfinder Solutions Inc.
www.pathfindersol.com | > 888-OOA-PATH | > | >effective solutions for software engineering challenges| > | > Peter Fontana voice: +01 508-384-1392 | > peterf@pathfindersol.com fax: +01 508-384-7906 | >_______________________________________________________| > > > lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > And S-M Design would consist of the IM for each domain, the object > communication model, the state model, the process model, and finally > archetype-based decisions (assuming you are using a tool to translate, > but what if you're not?) If you are not using a tool to translate, you would still have some specification of the architecture and the translation rules. I would consider that part of the design. > If someone asks for, > > 1. A System Design Document, and I would provide the domain chart updated with a block(s) to represent the real hardware and any other non-software components. That block would be attached to whatever domain (PIO in all the examples) actually talks to the hardware. The non-software block(s) can be expanded to whatever level of detail seems appropriate, but there's probably some other spec for that to which you can point. I would include the domain and bridge descriptions. > 2. A Software Design Document, I would guess this is the entire OOA plus architecture description and translation rules. > what goes into them, and what would be a good layout? For the OOA the usual CASE tools all provide a standardized dump of the application that would be suitable for the software DD. At least some of them actually have a specific 2167 format. > If it doesn't violate corporate policies, can someone share their > outlines? Alas, we don't document to any of the standards in which you are interested. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Fontana... > >Yes, clearly several of us have far too much time on our hands and we start > counting angels at the drop of a pin. > > That's "on the head of a pin". No wonder you keep coming up with the wrong > count. ;-) You pick the place, I pick the time. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday... > Having performed OOA in a 2167A environment, here's my take: 1) My > preference is not to separate OOA and OOD when using S-M. Just write > one document, starting with domain models, object diagrams, state > diagrams and data flow diagrams. Document each in turn and call it a > Software 'Structure' Document. (Been asked not to use that word, > Ssshh! design.) 2) If you must produce an SRS and an SDD, like I had > to on my last project, then document the S-M model as above and > document it in an SRS. Describe each of the data flow operations, in > terms of the data flow in, data flow out, the state, the object and > the domain it is in. Put a 'shall' in the description and label it > as a requirement.
The text is then extracted from the diagrams and > the requirements randomly shuffled and replaced back into the > document to give an arbitrarily ordered, complete set of functional > requirements. The diagrams can be placed in an appendix, so that you > can counter those reviewers that insist that you're putting design > (oops, had to say it) in an SRS, by saying "the diagrams do not form > part of the SRS, but are placed here to aid the reader's > understanding ...". The diagrams can then be made into the content of > the SDD with the same descriptions, but now ordered to correspond to > the diagrams, and with the word 'shall' removed. My simplified view > of DOD software development. As anyone working on a DOD contract > will know, the whole process generally takes 5-7 years, independent > of the size of the project. And the nation is being protected by the value added in Step 2? I'm glad I'm too old to care anymore. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Sam Walker writes to shlaer-mellor-users: -------------------------------------------------------------------- I have a question which I hope some of you can help me with. Is it bad SM style to end up with an OOA model which has separate state actions which contain some common action? e.g. A.state_3 ( do some stuff ... do common stuff ) B.state_2 ( do some different stuff ... do common stuff ) [ I would draw ADFDs instead, but it gets a little tricky with email] If not, does the SM method acknowledge this as a level of reuse? The reason I ask this as a method question and not a tool question is that 'Object Lifecycles, 6.4 Reuse of Processes' suggests it is very common to find the same process used in several ADFDs. This would answer my question, except that it then goes on to define a process as an accessor, an event generator, a transformer, or a test. So I will re-formulate my question as: is there a level of re-use associated with a 'group of processes' within the SM method? If not, why not? Lahman: Is it Hugo? __________________________ Sam Walker Software Engineer Advanced Technology Division Tait Electronics Ltd Phone (64) (03) 358 6683 Fax (64) (03) 358 0432 "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Munday: ----------------------- > > Having performed OOA in a 2167A environment, > here's my take: > > 1) My preference is not to separate OOA and OOD > when using S-M. Just write one document, starting > with domain models, object diagrams, state diagrams > and data flow diagrams. Document each in turn and > call it a Software 'Structure' Document. (Been asked > not to use that word, Ssshh! design.) > > 2) If you must produce an SRS and an SDD, like I had to on my last > project, then document the S-M model as above and document it in an > SRS. Describe each of the data flow operations, in terms of the data > flow in, data flow out, the state, the object and the domain it is in. > Put a 'shall' in the description and label it as a requirement. The > text is then extracted from the diagrams and the requirements randomly > shuffled and replaced back into the document to give an arbitrarily > ordered, complete set of functional requirements.
The diagrams can be > placed in an appendix, so that you can counter those reviewers that > insist that you're putting design (oops, had to say it) in an SRS, by > saying "the diagrams do not form part of the SRS, but are placed here > to aid the reader's understanding ...". > > The diagrams can then be made into the content of the SDD with the > same descriptions, but now ordered to correspond to the diagrams, and > with the word 'shall' removed. > > My simplified view of DOD software development. As anyone working on a > DOD contract will know, the whole process generally takes 5-7 years, > independent of the size of the project. > > Les. > [snip] > Hope this helps, > > -Chris > > ------------------------------------------- > Chris Lynch > Abbott Ambulatory Infusion Systems > San Diego, Ca     LYNCHCD@HPD.ABBOTT.COM > > ------------------------------------------- > > > "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- (sorry for the duplicate-- I hit some hidden send key... - CDL) > Responding to Munday: > ----------------------- > >Having performed OOA in a 2167A environment, >here's my take: > >1) My preference is not to separate OOA and OOD >when using S-M. Just write one document, starting >with domain models, object diagrams, state diagrams >and data flow diagrams. Document each in turn and >call it a Software 'Structure' Document. (Been asked >not to use that word, Ssshh! design.) > >2) If you must produce an SRS and an SDD, like I >had to on my last project, then document the S-M >model as above and document it in an SRS. Describe >each of the data flow operations, in terms of the >data flow in, data flow out, the state, the object >and the domain it is in. Put a 'shall' in the description >and label it as a requirement. The text is then >extracted from the diagrams and the requirements >randomly shuffled and replaced back into the document >to give an arbitrarily ordered, complete set of >functional requirements. The diagrams can be placed >in an appendix, so that you can counter those reviewers >that insist that you're putting design (oops, had to say it) >in an SRS, by saying "the diagrams do not form part >of the SRS, but are placed here to aid the >reader's understanding ...". > >The diagrams can then be made into the content >of the SDD with the same descriptions, but now >ordered to correspond to the diagrams, and with >the word 'shall' removed. I can understand the desire to thumb one's nose at 2167A and what it represents. However, for "life and death" software there is no substitute for a **readable** SRS (or similarly detailed system requirements spec.) > -Chris > > ------------------------------------------- > Chris Lynch > Abbott Ambulatory Infusion Systems > San Diego, Ca LYNCHCD@HPD.ABBOTT.COM > ------------------------------------------- > > > Neil Lang writes to shlaer-mellor-users: -------------------------------------------------------------------- Sam Walker wrote: > > Sam Walker writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > I have a question which I hope some of you can help me with. > > Is it bad SM style to end up with an OOA model which has > separate state actions which contain some common action? > > e.g. > > A.state_3 > ( > do some stuff > ... > do common stuff > ) > > B.state_2 > ( > do some different stuff > ... > do common stuff > ) > > [ I would draw ADFDs instead, but it gets a little tricky with email] Absolutely not.
In fact I've built models on more than one occasion in which the complete action for two states turns out to be the same. (They must be modeled as distinct states because the context differs.) > > If not, does the SM method acknowledge this as a level of reuse? > The reason I ask this as a method question and not a tool question > is that 'Object Lifecycles, 6.4 Reuse of Processes' suggests it is > very common to find the same process used in several ADFDs. > This would answer my question, except that it then goes on to define > a process as an accessor, an event generator, a transformer, or a > test. So I will re-formulate my question as: is there a level of > re-use associated with a 'group of processes' within the SM method? If > not, why not? The method as it is currently defined allows for reuse at only the individual process level. There is formally no concept of reuse of a group of processes. However I personally think that the idea of re-use of a group of processes (analogous to a macro) makes a lot of sense, and I think that progress was made towards that goal with the introduction of the SDFD to handle stateless processing. This could serve as an anchor point for defining such reusable process groups. Disclaimer -- I'm speaking here only as an interested bystander and not as one of the authors of the OOA96 report. Hope this helps Neil ---------------------------------------------------------------------- Neil Lang neillang@pacbell.net ---------------------------------------------------------------------- lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Walker... > Is it bad SM style to end up with an OOA model which has > separate state actions which contain some common action? > > e.g. > > A.state_3 > ( > do some stuff > ... > do common stuff > ) > > B.state_2 > ( > do some different stuff > ... > do common stuff > ) There is no problem with doing this. > If not, does the SM method acknowledge this as a level of reuse? > The reason I ask this as a method question and not a tool question > is that 'Object Lifecycles, 6.4 Reuse of Processes' suggests it is > very common to find the same process used in several ADFDs. > This would answer my question, except that it then goes on to define > a process as an accessor, an event generator, a transformer, or a > test. So I will re-formulate my question as: is there a level of > re-use associated with a 'group of processes' within the SM method? If > not, why not? S-M really only supports reuse at two levels: the process and the domain. The process reuse (other than accessors) is often limited in practice to a particular state machine. The domain reuse, OTOH, was a very powerful mechanism for component reuse long before it became fashionable. BTW, I should point out that there is a third type of reuse when you have subtyping. Notationally you can place the common actions for the subtypes in the supertype. > Lahman: Is it Hugo? Don't go there. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com "Stephen R. Tockey" writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote > > Responding to Munday... > > > Having performed OOA in a 2167A environment, here's my take: 1) My > > preference is not to separate OOA and OOD when using S-M.
Just write > > one document, starting with domain models, object diagrams, state > > diagrams and data flow diagrams. Document each in turn and call it a > > Software 'Structure' Document. (Been asked not to use that word, > > Ssshh! design.) 2) If you must produce an SRS and an SDD, like I had > > to on my last project, then document the S-M model as above and > > document it in an SRS. Describe each of the data flow operations, in > > terms of the data flow in, data flow out, the state, the object and > > the domain it is in. Put a 'shall' in the description and label it > > as a requirement. The text is then extracted from the diagrams and > > the requirements randomly shuffled and replaced back into the > > document to give an arbitrarily ordered, complete set of functional > > requirements. The diagrams can be placed in an appendix, so that you > > can counter those reviewers that insist that you're putting design > > (oops, had to say it) in an SRS, by saying "the diagrams do not form > > part of the SRS, but are placed here to aid the reader's > > understanding ...". The diagrams can then be made into the content of > > the SDD with the same descriptions, but now ordered to correspond to > > the diagrams, and with the word 'shall' removed. My simplified view > > of DOD software development. As anyone working on a DOD contract > > will know, the whole process generally takes 5-7 years, independent > > of the size of the project. > > And the nation is being protected by the value added in Step 2? > I'm glad I'm too old to care anymore. I have heard of this situation (lots of government money being spent for no apparent value added) described as "Welfare With Dignity". Seems like it might be a rather accurate description after all. :^) Now, about those angels... Are they most productive in teams of three to four? Cheers, -- steve Carolyn Duby writes to shlaer-mellor-users: -------------------------------------------------------------------- Sam Walker wrote: > > Sam Walker writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > I have a question which I hope some of you can help me with. > > Is it bad SM style to end up with an OOA model which has > separate state actions which contain some common action? > > e.g. > > A.state_3 > ( > do some stuff > ... > do common stuff > ) > > B.state_2 > ( > do some different stuff > ... > do common stuff > ) > You might try factoring the "common stuff" into a new state and connecting both state_3 and state_2 via a self-directed event. This is not always possible because state_2 and state_3 may have different destination states for the same event. About the best solution I've seen so far is to put "common stuff" in a service of the domain and invoke a wormhole to the service in state_2 and state_3. I think this is an area where OOA could benefit from object-based services. Carolyn -- ________________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges | | Carolyn Duby voice: +01 508-384-1392 | carolynd@pathfindersol.com fax: +01 508-384-7906 | ________________________________________________________|
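Carolyn's first suggestion can be sketched in a few lines of Python (for illustration only: the names are hypothetical and the event queue is deliberately simplified; a real architecture would dispatch events itself):

    # "common stuff" factored into a state of its own, entered via a
    # self-directed event from either originating state
    class Machine:
        def __init__(self):
            self.events = []                 # pending self-directed events
            self.trace = []

        def state_2(self):
            self.trace.append("different stuff")
            self.events.append("GoCommon")   # self-directed event

        def state_3(self):
            self.trace.append("some stuff")
            self.events.append("GoCommon")   # same event, same destination

        def state_common(self):
            self.trace.append("common stuff")

        def dispatch(self):
            # note the caveat above: this only works when both states
            # can share the same destination state
            while self.events:
                if self.events.pop(0) == "GoCommon":
                    self.state_common()

    m = Machine()
    m.state_3()
    m.dispatch()    # m.trace is now ["some stuff", "common stuff"]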
Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > If it doesn't violate corporate policies, can someone share their > > outlines? > Alas, we don't document to any of the standards in which you are interested. Actually it doesn't have to be DoD or IEEE. I only mentioned these because they are the only docs I have reference to. **Any** outline, from anyone, would be welcome. Kind Regards, Allen Theobald Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Carolyn Duby wrote: > Sam Walker wrote: > > Is it bad SM style to end up with an OOA model which has > > separate state actions which contain some common action? > You might try factoring the "common stuff" into a new state > and connecting both state_3 and state_2 via a self-directed event. > This is not always possible because state_2 and state_3 may > have different destination states for the same event. About > the best solution I've seen so far is to put "common stuff" in > a service of the domain and invoke a wormhole to the service > in state_2 and state_3. I think this is an area where OOA could > benefit from object-based services. First, if there is a significant amount of repetition of actions then I would investigate the chosen abstractions. You may have missed an active object somewhere; or you may even have some pollution. However, if you are re-using processes then you will have fewer processes than [states * average-number-of-processes-per-state] in your model. In this situation, it is inevitable that there will be some repetition (cf. what is the probability that 2 people from a group of 30 will share a common birthday). This implies that there will be some coincidental cohesion in the model. It is *always* wrong to attempt to factor such coincidences into re-usable blocks. (An optimiser may factor such blocks; but they are not re-usable in the accepted meaning of the term.) In the case where the repetition is not coincidental, I would always worry about it; and, if you can't factor it, justify it. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect.
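Whipp's birthday aside is easy to check; a couple of lines of Python, assuming 365 equally likely birthdays:

    # probability that at least two people in a group of 30 share a birthday
    p_distinct = 1.0
    for k in range(30):
        p_distinct *= (365.0 - k) / 365.0
    print(1.0 - p_distinct)    # ~0.706, i.e. about a 70% chance

So even a modest pool of shared processes makes some coincidental repetition very likely, which is his point.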
peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users: -------------------------------------------------------------------- At 08:15 AM 9/4/98 -0500, shlaer-mellor-users@projtech.com wrote: >"Stephen R. Tockey" writes to shlaer-mellor-users: >-------------------------------------------------------------------- >Now, about those angels... Are they most productive in teams of three >to four? Actually (assuming a large project context), you assign one angel to a team of 2-3 mortals for best overall project productivity... _______________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com | 888-OOA-PATH | | effective solutions for software engineering challenges| | Peter Fontana voice: +01 508-384-1392 | peterf@pathfindersol.com fax: +01 508-384-7906 | _______________________________________________________| "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- Chris, Don't misunderstand me. I'm not knocking 2167A, just some of the people that use it. 2167A is great for Structured Analysis/Structured Design, because there is a real paradigm shift from SA->SD. The documentation suits this well. If using 2167A for OOA/D the process needs to be quite radically altered. The SRS and SDD need to be restructured, preferably into a single document. My original tongue-in-cheek comments were knocking the people that follow the process, not the process itself. Leslie. I can understand the desire to thumb one's nose at 2167A and what it represents. However, for "life and death" software there is no substitute for a **readable** SRS (or similarly detailed system requirements spec.) > -Chris > > ------------------------------------------- > Chris Lynch > Abbott Ambulatory Infusion Systems > San Diego, Ca LYNCHCD@HPD.ABBOTT.COM > ------------------------------------------- > > > "Bob Dodd" writes to shlaer-mellor-users: -------------------------------------------------------------------- Just out of interest, how large are the model fragments you want to re-use? My experience has been that yes, there are times when small fragments of action language do get repeated between states, but they tend to be only two or three line fragments. When reviewed, those fragments that were larger than three lines were in areas of our modelling that caused most debate during review, and tended to point towards problems in the IM (occasionally the state model, but mostly abstraction problems in the IM). Of those small "acceptable" fragments, they divided into two basic groups: a) navigation to other instances within the domain b) short algorithms, e.g. checking if a caller to a telephone exchange is already engaged in another call. Our navigation fragments were equivalent largely to accessors on ADFDs (my projects tend not to use ADFDs but rather use action language directly to describe state actions). So these were caused by not using ADFDs, and hence we missed out on process re-use. The algorithm fragments tended to be shared between states, not so much on a single state model, but across a "family" of state models, all of which were subtypes of the same object. We didn't have many of these fragments, mostly because of our use of subtype migration (see below). The fragments concerned would not have mapped 1:1 to an ADFD process but rather to a small group of ADFD processes, and hence even ADFDs would not easily have captured the re-use. (Suggestion: how about allowing certain categories of ADFD process to decompose like structured design DFDs do; that would probably have covered our requirements.) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - At the risk of repeating at least some of what has already been said on re-use in Shlaer-Mellor... ** Primary re-use within Shlaer-Mellor is domain re-use. Which makes sense when you think about how expensive an IM and OCM are to develop compared to the cost of duplicating the occasional process fragment. ** ADFD processes may be re-used, though mostly these tend to be just accessors. Handy, but any RD process/CASE tool worth its salt should identify that sort of duplication for you - ADFD processes are only di-graphs and they can easily be checked for topological equivalence. ** In theory at least, splicing allows common active behavior to reside in a super-type and be shared between sub-types. In practice, splicing is a problem in that existing CASE tools and commercial architectures don't provide exactly brilliant support for this re-use technique. ** Common fragments of state model may be re-used through sub-type migration. Classic sub-type migration (e.g. "working robot --> failed robot") aims to model different behaviour to the same external events, such as "safe" behavior in the case of exceptions.
However, we also have cases where the role played by an object changes over time, a typical example being the user of a telephone. Sometimes he is a caller, sometimes the called, yet once the call is established both the called and the calling user have similar though not identical behavior (e.g. you can put someone on hold regardless of whether you were "called" or "calling"). In this case common behavior after connect could be accomplished by (say) three migrating sub-types: "called", "calling", "connected". Re-use of the disconnect behavior would then be possible. At least one project I worked on made heavy and successful use of this technique. ** Finally, the quality of the action language used to specify will affect the size and scope of the re-used fragments. Unless you are lucky enough to be in a position to develop your own company-specific action language, this will not be a practical way to improve re-use. Examples of what can be done are: Kennedy-Carter style polymorphic synchronous services, a C++ style "name space", labelled code fragments, and hierarchical ADFDs. Bob Dodd bob-dodd@dircon.co.uk "Dean S. Anderson" writes to shlaer-mellor-users: -------------------------------------------------------------------- Neil Lang wrote: > The method as it is currently defined allows for reuse at only > the individual process level. There is formally no concept of > reuse of a group of processes. > > However I personally think that the idea of re-use of a group of > processes (analogous to a macro) makes a lot of sense, and I > think that progress was made towards that goal with the introduction > of the SDFD to handle stateless processing. This could serve > as an anchor point for defining such reusable process groups. > In fact, in our toolset this was how we implemented iterative processes. You define an SDFD for the iterative process and then invoke it like any other process. In essence the SDFD becomes a macro. We also wound up using it to handle repeated sections of state model, not just iterative processes. SDFDs are a great idea that helped a lot. Dean S. Anderson (formerly of Transcrypt International) ka0mcm@mail.winternet.com
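A minimal Python sketch of the SDFD-as-macro idea Anderson describes (for illustration only: the function names are hypothetical; in the toolset the reusable unit would be a stateless dataflow diagram invoked like any other process):

    def accumulate_amounts(instances):
        # stands in for a stateless "SDFD": the repeated fragment is
        # defined once and invoked wherever it is needed
        return sum(inst["amount"] for inst in instances)

    def posting_action(accounts):
        # one state action invoking the reusable unit
        return accumulate_amounts(accounts)

    def audit_action(accounts):
        # a second action re-using it, macro-style
        return accumulate_amounts(accounts)

    accounts = [{"amount": 10}, {"amount": 5}]
    assert posting_action(accounts) == audit_action(accounts) == 15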
Sam Walker writes to shlaer-mellor-users: -------------------------------------------------------------------- Carolyn Duby wrote >You might try factoring the "common stuff" into a new state and >connecting both state_3 and state_2 via a self-directed event. Then I would ask, how did you arrive at this new state? I believe it has been invented to cope with these repeated process fragments, and that it is not part of the object's true lifecycle. I would always question the addition of new states at process modelling time: 'Is this really a new state, or have I just invented this as a place holder for behaviour?'. >About the best solution I've seen so far is to put "common stuff" >in a service of the domain and invoke a wormhole to the >service in state_2 and state_3. I think this is an area where OOA >could benefit from object-based services. I agree. Sam __________________________ Sam Walker Software Engineer Advanced Technology Division Tait Electronics Ltd Phone (64) (03) 358 6683 Fax (64) (03) 358 0432 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > > Alas, we don't document to any of the standards in which you are > > interested. > Actually it doesn't have to be DoD or IEEE. I only mentioned these > because they are the only docs I have reference to. **Any** outline, > from anyone, would be welcome. OK, FWIW... But I don't think it will help much because we tend to think of the OOA as one relatively small part of the overall design process. For background, since software is only one part of fairly complex systems, our division has an overall development process with 9 phases that is basically a waterfall. The software development is mapped into that process. Only the middle five of those phases are relevant to software development. However, it is not uncommon for the software group to develop iteratively with a bunch of small waterfalls. In that case we have to ensure that the required work products are available at the division's phase transitions, regardless of when we did the actual work. The basic work products that we produce are: Statement of Requirements. Marketing is supposed to provide this but in practice we write them and they are supposed to review them. This comes in two flavors: preliminary and final, depending upon the division process phase. Project Plan. This is very much like the IEEE version. The key difference is that there is a table that itemizes what product test QA will use to verify each requirement. This also comes in two flavors. Functional Specification. This is a black box description of the software functionality from the user's view. It is quite detailed, down to GUI button labels. However, in discussing the overall functionality it is necessary to identify major system components, so we include the Domain Chart with the FSpec. This also comes in two flavors. It is delivered with the final Project Plan. Information Models. Because of the need to provide an accurate schedule for the project plan, we have to do the IM early so we can get object counts. Traceability Matrix. This is a table that identifies specific tests that the QA will develop to test requirements. Each requirement must have at least one QA test associated with it. QA develops these tests as the software is developed, using the Functional Specification to determine how to enter the test data. Typically Software is not responsible for generating this table since it also includes hardware and mechanical specifications. The remaining OOA. State models, action language, and other OOA stuff are done in a single phase of the division's process. Our CASE tool dumps a reasonable presentation as a "domain notebook" in ASCII along with the diagrams. We try to write domain, object, attribute, and relationship descriptions, and these are carefully reviewed. Therefore we may backfill these for the DC and IM done earlier. Implementation Specification. This is a whitebox specification that describes in detail how the software will be implemented for all non-OOA code. We have a fair amount of non-OOA development for language translators and whatnot. We also provide an ISpec to deal with bridges, transforms, etc. in the OOA portion. Test plans. These are the tests that Engineering runs. We tend to think of these as use cases because we are moving towards an incremental development where we completely implement one feature at a time. For OO development we try to use the same test cases for domain test, software integration, and hardware integration testing. We devote a fair amount of effort to providing test harnesses to do this and to make generating tests easier. The biggest benefit lies in regression testing that can be automated. Architecture specification.
When we can't use automatic code generation for performance reasons, we provide an architecture specification that defines the architecture and translation rules. Delivery of the completed and reviewed work products forms the major milestones for the development process, regardless of the size of the project. In addition there are other milestones related to completion of specific tasks that may or may not be present, depending upon the size of the project (e.g., for a large project we might break out state models from process models, but for adding features to models this might just be "complete OOA"). Some tasks always have milestones: domain test, unit test, translation, and software integration test. In our shop we are very heavy on reviews -- all work products except code have peer reviews, sometimes multiple reviews. (We don't do code reviews because we have found that our specifications have sufficient detail that the types of errors they would catch are more efficiently found in unit test and domain simulation.) As a cultural thing, we strive for a high level of detail in specifications (long before OO we figured out that writing code is the least important thing done in software development). We are also big on defect prevention in the process, which drives the review process. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Tockey... > > And the nation is being protected by the value added in Step 2? > > I'm glad I'm too old to care anymore. > > I have heard of this situation (lots of government money being spent > for no apparent value added) described as "Welfare With Dignity". Seems like it > might be a rather accurate description after all. :^) My problem is that verifying requirements is a laudable objective. But if the tool for doing that encourages this, then the original objective is lost. > Now, about those angels... Are they most productive in teams of three > to four? I would say that based upon recently acquired data the answer would have to be one. [An inside joke, folks. Steve's paper makes a persuasive case that the minimum cost development has one developer.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATB could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com tristan.pye@aeroint.com writes to shlaer-mellor-users: -------------------------------------------------------------------- I have been given the task of evaluating a number of OOA tools for use in our company. I would be grateful for any feedback from any users of any of them. (I know this is a PT run site, but I'm sure they won't mind some healthy competition!) The products we have been recommended so far are: BridgePoint (PT) iOOA (KC) Rational Rose We are particularly interested in automatic generation of code direct from the models; however, this will not be your bog standard C or C++, so a large degree of flexibility is required. We also need to reverse engineer our existing models into whatever we choose, probably from data contained in our own OOA of OOA database. Any comments are welcome on these or any other products you recommend/hate.
Please feel free to reply direct if you think the info will not be of interest to others on the group. Many thanks, Tristan Pye -------------------------------- Tristan Pye Aerosystems International www.aeroint.com +44 (0)1935 443103 tristan.pye@aeroint.com Thursday 10 September 1998, 5:49 pm -------------------------------- bgrim@ses.com (Bob Grim) writes to shlaer-mellor-users: -------------------------------------------------------------------- Don't be fooled by my email address, I am not an employee of SES but I am a consultant specializing in OOA/RD, and my favorite tool for doing OOA/RD using the S/M methodology is definitely Objectbench. You can read about it at their web site (www.ses.com). The only competing tool I have used is BridgePoint and I will be glad to share my reasons for preferring Objectbench to it. Let me preface these comparisons by saying I haven't seen BridgePoint in over a year and a half. 1) Objectbench has the best graphical simulator I have seen with a CASE tool. By using this to its capacity, I have had great success testing my analysis and KNOWING it is correct before code was even generated (much less tested on the target). BridgePoint's simulator was text only and I found verifying a model in BridgePoint was much like reading a log file -- rather cumbersome, tedious, and error prone. 2) Code Generation. I am currently working on a project where we use SES's code generator and architecture. It is going very well and we do have 100% code generation. SES also has several architectures to start from. I know BridgePoint boasts this capability as well. 3) Action Language. Objectbench's action language (process models) is a superset of the C language. I found BridgePoint's action language very limiting compared to the extensions SES put into theirs. I have to admit that having the C language as part of the action language does allow the opportunity for design to creep into the models. 4) Non standard S/M additions. What I mean by this is that SES has put some things into the tool that may not completely "jive" with "pure" Shlaer/Mellor OOA theory. However, these additions were typically requested by customers and I do find them extremely useful. BridgePoint didn't have these extensions. Examples are: prioritized events, explicit come to arcs, etc... That's enough. I have to get back to work. Thanks Bob Grim Gerhard Kessell-Haak (Gerhard Kessell-Haak) writes to shlaer-mellor-users: -------------------------------------------------------------------- >I have been given the task of evaluating a number of OOA tools for use >in our company. I would be grateful for any feedback from any users of >any of them. (I know this is a PT run site, but I'm sure they won't mind >some healthy competition!) The products we have been recommended >so far are: > >BridgePoint (PT) iOOA (KC) >Rational Rose If you're looking for real-time CASE tools, then ObjecTime doesn't look too bad either; though I can't say too much because I've only ever seen demonstrations of the product. Interestingly, they're in a strategic alliance with Rational Rose - perhaps an admission by Rational that their tool is not perfectly suited to RT development. jrwolfe@projtech.com (John R. Wolfe) writes to shlaer-mellor-users: -------------------------------------------------------------------- I realize that Tristan opened the door here, but please allow me to remind you all that this list is reserved for noncommercial messages.
Without this guideline, the list will quickly become nothing more than a SPAM bucket for every software vendor in the universe. If you want to debate this issue with me, please do so with direct e-mail or a phone call. I expect mine to be the very last note posted to this particular thread. Thanks, --JRW ------ Shlaer-Mellor Method for Real-Time Software Development ------------- John R. Wolfe Tel: (520) 544-2881 ext. 12 President and CEO Fax: (520) 544-2912 Project Technology, Inc. email: jrwolfe@projtech.com 7400 N. Oracle Road - Suite 365 URL: http://www.projtech.com Tucson, AZ 85704 ------------------------- Real-Time, On Time ------------------------------- "Leslie Munday" writes to shlaer-mellor-users: -------------------------------------------------------------------- Bob, I can't disagree with any of your points, but you forgot to mention that ObjectBench has one of the most out-of-date, non-standard, crummiest model builder user interfaces of any CASE tool on the market. Otherwise it's fine. Just my opinion, Leslie. Don't be fooled by my email address, I am not an employee of SES but I am a consultant specializing in OOA/RD, and my favorite tool for doing OOA/RD using the S/M methodology is definitely Objectbench. You can read about it at their web site (www.ses.com). The only competing tool I have used is BridgePoint and I will be glad to share my reasons for preferring Objectbench to it. Let me preface these comparisons by saying I haven't seen BridgePoint in over a year and a half. 1) Objectbench has the best graphical simulator I have seen with a CASE tool. By using this to its capacity, I have had great success testing my analysis and KNOWING it is correct before code was even generated (much less tested on the target). BridgePoint's simulator was text only and I found verifying a model in BridgePoint was much like reading a log file -- rather cumbersome, tedious, and error prone. 2) Code Generation. I am currently working on a project where we use SES's code generator and architecture. It is going very well and we do have 100% code generation. SES also has several architectures to start from. I know BridgePoint boasts this capability as well. 3) Action Language. Objectbench's action language (process models) is a superset of the C language. I found BridgePoint's action language very limiting compared to the extensions SES put into theirs. I have to admit that having the C language as part of the action language does allow the opportunity for design to creep into the models. 4) Non standard S/M additions. What I mean by this is that SES has put some things into the tool that may not completely "jive" with "pure" Shlaer/Mellor OOA theory. However, these additions were typically requested by customers and I do find them extremely useful. BridgePoint didn't have these extensions. Examples are: prioritized events, explicit come to arcs, etc... That's enough. I have to get back to work. Thanks Bob Grim "Nau, Peter" writes to shlaer-mellor-users: -------------------------------------------------------------------- Before you pick an OOA-RD tool, it might be a good idea to choose a method and notation first! This may be self-evident, but if not, it's an important point. Method is not the same as notation, and method is more important than notation. If you like, you can do S-M with a UML subset (see PT web site), but as far as I know, the S-M tools don't support UML. If you really must use UML, then, obviously, you can eliminate all the non-UML tools.
(However, the non-UML tools will no doubt be modified someday to use a subset of UML, even if they are S-M tools.) All that being said, it's probably not a good idea at this time to use a UML-only tool if you're going to do S-M. Let's make some assumptions: o You're using S-M OOA-RD. o You want to buy a modifiable, portable code generator. o You're not using UML. This narrows the field significantly. Using C as the action language is probably a bad idea. Again, see the P-T web site for action language discussions. Also, if you have a good code generator, a simulator may not be very important. Based on our evaluation a few years ago, this left as the leaders: o Bridgepoint and associated tools (PT) o iOOA (Kennedy-Carter, in the UK) We liked both products, but KC's limited presence and support ability in the U.S. knocked them out of the running for us. As for code generation, in my opinion, the BP technology is extremely well thought out. It is target-portable and very flexible. --- Peter Nau St. Jude Medical, Inc., manufacturer of embedded cardiac defibrillators and pacemakers Sunnyvale, Calif. "Bob Dodd" writes to shlaer-mellor-users: -------------------------------------------------------------------- >I have been given the task of evaluating a number of OOA tools for use in our company. I would be grateful for any feedback from any users of any of them. (I know this is a PT run site, but I'm sure they won't mind some healthy competition!) >The products we have been recommended so far are: > >BridgePoint (PT) >iOOA (KC) >Rational Rose I was involved in the evaluation of BridgePoint & iOOA a couple of years back, so my impressions are probably a little bit dated but here goes. Hopefully, most of these tools have moved on since our evaluation... We actually looked at five possible tools: SES Object Bench, iOOA, ObjectTeam, BridgePoint, and developing our own tools (which I won't discuss). We were in the unique position of having already developed our own code generation tools and simulator, much as you seem to be planning to do yourselves, and this naturally coloured our evaluation criteria. Taking each tool in turn: 1) SES Object Bench Nice simulator (but we already had a better one that fully supported Shlaer/Mellor including domains & bridges, which Object Bench did not), but remove the simulator and it was a very average drawing tool that didn't support all the Shlaer/Mellor work products, was at the time completely missing any support for domains and bridges, and had limited OCM support. There was a limited API for reading and none for writing to the internal database, and no way to easily extend/interface to the tool. We felt we couldn't easily code generate from the model, nor could we integrate the tool with the rest of our tool-chain. It was the easiest product to dismiss. 2) iOOA We were caught a little "between versions" on this one, and the whole user interface has changed since our evaluation. Frankly, from an analyst point of view, two years ago iOOA provided the best environment for modelling in Shlaer-Mellor from the domain chart downwards; in particular it is the only time I have ever felt happy working WITH the OCM rather than against it... Also worth noting was the versioning approach taken to the domain chart, and the transaction control/rollback of changes to the models. Domains have versions, and a domain chart consists of bridges between these versioned domains. iOOA also provided limited support for Use-Case specification and for population tables.
I believe that in both these areas they are making significant improvements.

iOOA also had its own simulation facilities; however, these facilities were not fully investigated because of our own extensive simulation & test tools.

OK, now for the negatives...

a) iOOA didn't support subsystems, which was a shame, but frankly we didn't use them much anyway. On projects where I have used the SRM & SCM it has tended to be where domains were not supported by the RD and subsystems got used as "pretend" domains.

b) iOOA is a very "closed" system. They provided a good API for read but none for write, and limited access to integrate their tool within our toolchain. They were also very firm about NOT providing us with write access to their database under any circumstances, though maybe they would be more understanding these days. Access to the database was one of our main concerns and it was one of the prime reasons iOOA failed evaluation. Write access to the database is really important: firstly you need a good API to make code generation easier, and secondly you (or at least "we") need write access to extend the colourisation process to make code generation work, and to fix problems & add missing functionality to the CASE tool. The last thing you want is to get stuck with a tool that you can't fix and can't afford to replace (politically if not financially).

3) ObjectTeam

The greatest thing PT ever did for humanity was to buy up ObjectTeam and ditch it... It was the tool we were using before, and the reason for, our evaluation. I know there are projects out there that are still using the blasted thing, and you have my sympathy. All that said, ObjectTeam had an open interface to its database: pictures and data dictionary, a basic SQL-like interface for report generation, and we could launch individual edit sessions from an integrated set of development tools. Then again, it was so bad you needed that flexibility.

4) Bridgepoint

As with iOOA we were on the "cutting edge" of the tool's development, so again I won't comment on the user interface or its reliability. My first reaction to Bridgepoint was that it was a more "natural" CASE tool to use, without the transaction locking/logging concepts of iOOA. It had most of the basic Shlaer-Mellor support I expected, though without the domain & bridge support you might expect. It also had some non-Shlaer-Mellor work products which came as a surprise (to try and cope with not having domain support, I think). Overall though, with the exception of domains, it had everything you would expect from an analysis perspective, and some things, like the action language changing as you renamed events, were pretty good.

Bridgepoint also had its own simulation facilities; however, these facilities were not fully investigated because of our own extensive simulation & test tools.

It's when we looked at Bridgepoint from an RD point of view that it scored well. There was only a limited API to the database, BUT the whole model was held in a proper ObjectStore database, and hence with a little work we could read/write/extend the data model. We found PT remarkably cooperative and helpful once we explained what we wanted to do and why (the "why" was very important to them. I think they also wanted to be sure we knew what we were doing before they let us loose on their data schema). The access to the data, and PT's attitude, pretty well swung it for them.

Now for the negatives...

a) What on earth was PT doing, selling a CASE tool that didn't even conform to the books, let alone OOA96?
The answer probably lies somewhere in the origins of Bridgepoint... Hopefully this issue has been properly addressed by now.

b) Like iOOA, Bridgepoint tries hard to be a "closed system", though perhaps not as hard as iOOA, and we found it difficult to integrate with other development tools. Not at the database level; more trying to launch edit sessions etc.

c) The action language built into Bridgepoint at the time (maybe this has changed) was not very nice, and was neither 'SMALL' nor KC-style 'ASL'. Also, the mapping between the action language and the pictures was hard-coded. If you didn't use their action language, you didn't get the automatic update. Since we already had a large system of 5 domains to port (with our own action language...) this was a problem. Even if we ported to their action language, we would have had to do it again when Bridgepoint moved over to SMALL. Again, this problem may now have gone away for most people.

Conclusion
----------------

> We are particularly interested in automatic generation of code direct from the models,
> however this will not be your bog standard C or C++, so a large degree of flexibility is
> required. We also need to reverse engineer our existing models into whatever we choose,
> probably from data contained in our own OOA of OOA database.

If RD is your main concern, and if you really plan to write your own code generator and archetype models, you need the same access to the model database that we did. Make sure that whichever supplier you choose gives that access with proper support (and I would suggest you get it written explicitly into the sales contract...).

I think you also need to consider your testing strategy. To simulate or not to simulate? My experience with this is that simulation makes test & debug of the RD easier, if the model you have to translate has been simulated first... at least there is some chance that the bugs you see in the target system are caused by the RD and not the OOA.

If the system you are building has many domains (and it should...) then you really need to make sure your database has some way of representing these domains and their bridges cleanly, either because the CASE tool already supports them, or because you have enhanced the database yourself.

One word on Rational Rose, by the way. Rose doesn't support Shlaer-Mellor. Period. At best it supports UML (or at least the 3 Amigos' view of UML), and hence instantly we have problems. You can make UML represent Shlaer-Mellor diagrams, but you have to work at it, which means having to do a lot of work mapping the UML & SM pictures and rules, and even then it's difficult. There are good discussion papers from PT and KC on the subject, but you would be best advised to wait for the dust to settle before you apply those ideas to Rose. It's important, because if you don't stick to the SM rules for model behaviour, you will come seriously unstuck in the RD. Also remember that Rational "believes" in elaboration, not translation, and is very unlikely to provide much support for translation in Rose. So I'm afraid that for now I wouldn't touch Rose with a barge-pole.

Personally, if I had to choose a CASE tool (based on an OLD evaluation), I would be torn between iOOA and Bridgepoint.

** As an analyst, I would go straight for iOOA, for all of its many faults.
** As an RD person, it would be Bridgepoint hands down.
** As a project manager, I would go for Bridgepoint because of the RD.

Bob Dodd.
bob-dodd@dircon.co.uk

baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

I was also involved in an evaluation of these tools a few years ago, and I just wanted to add one point that brings up methodology differences which, I believe, are appropriate for discussion on this group.

One BIG difference for us between BridgePoint and I-OOA was support for synchronous processes. I-OOA supports synchronous processes associated with an object (both object based and instance based) and also associated with a domain. Bridgepoint, I believe, only allows synchronous processes to be associated with a domain. Interestingly enough, we had independently developed, and were already utilizing, synchronous processes associated with an object. So, this gave I-OOA an advantage in our evaluation.

KC's support for synchronous services is described in their OOA97 document, which can be found at http://www.kc.com/html/download.html

Bary Hogan
LMTAS
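To make the distinction concrete, here is a rough C++ sketch of how the three scopes of synchronous process might surface in generated code. All names are invented for illustration; this is not the actual output of either tool.

    #include <iostream>

    // Domain-based synchronous process: scoped to the whole domain,
    // with no object context at all.
    namespace OperatorInterface {
        int activeDialogs = 0;
        int ActiveDialogCount() { return activeDialogs; }
    }

    // Synchronous processes associated with an object:
    class Dialog {
    public:
        explicit Dialog(bool active) : active_(active) { ++count_; }
        bool IsActive() const { return active_; }   // instance-based
        static int Count() { return count_; }       // object-based (class-wide)
    private:
        bool active_;
        static int count_;
    };
    int Dialog::count_ = 0;

    int main() {
        Dialog d(true);
        std::cout << Dialog::Count() << " "                   // object-based
                  << d.IsActive() << " "                      // instance-based
                  << OperatorInterface::ActiveDialogCount()   // domain-based
                  << "\n";
        return 0;
    }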
Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greetings!

What if you are using MFC/Visual C++ as your UI tool?

Is the UI domain a service domain, implementation domain, or architectural domain? Why?

Does it just get a mission statement and no models?

Does it still have to "pass" the domain-replacement rule?

Thanks,

Allen Theobald

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> What if you are using MFC/Visual C++ as your UI tool?
>
> Is the UI domain a service domain, implementation domain,
> or architectural domain? Why?
>
> Does it just get a mission statement and no models?
>
> Does it still have to "pass" the domain-replacement rule?

Any GUI builder provides its own "modelling formalism" and translator. Therefore, the thing that you edit is a non-OOA domain on the domain chart; it has a translative bridge onto its implementation domain that is then connected to the rest of the system. You may decide that you want to give it a mission statement, and you will probably also use some technical notes. But I wouldn't do an OOA of the GUI itself.

If you are using the MFC document-view architecture, then this gives you a good starting point for developing your primary architectural domain. And, even though you won't do an OOA of the GUI: you will probably want to do an OOA of your "view" and a separate domain for your "document".

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 11:25 AM 9/15/98 +0200, shlaer-mellor-users@projtech.com wrote:
>Dave Whipp writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>Allen Theobald wrote:
>> What if you are using MFC/Visual C++ as your UI tool?

We've done a lot of this - both in the development of our own tools, and with client efforts.

>>
>> Is the UI domain a service domain, implementation domain,
>> or architectural domain? Why?

MFC is a collection of domains - a typical system may abstract 4 from this "pudding": GUI Mechanisms, Database, Operating System (for process and thread control), and Mechanisms (for CString, CList, etc). If explicit file system services are needed, then File System may emerge. Services of Product Configuration may be needed (the Registry). It depends on what is needed. Don't let Microsoft's lead (by putting MFC into a single giant conceptual container) affect your domain analysis. I always thought that if they knew what they were doing over there, they could deliver operating systems that would work properly.

>> Does it just get a mission statement and no models?

Yes, and a rigorously defined bridge interface.

>> Does it still have to "pass" the domain-replacement rule?

Yes. Think Motif. Then think of how much you want to be impacted by a port to "Win02" (or whatever).

>Any GUI builder provides its own "modelling formalism" and
>translator.

Good point Dave. The "GUI Mechanisms" (per above) is a realized, translated domain.

>But I wouldn't do an OOA of the GUI itself.

That's right.
>If you are using the MFC document-view architecture, then
>this gives you a good starting point for developing your
>primary architectural domain. And, even though you won't
>do an OOA of the GUI: you will probably want to do an OOA
>of your "view" and a separate domain for your "document".

While this is theoretically possible, I haven't done OOA/RD on any application where the application domain was concerned with concepts that would come close to this type of "Document" or "View". There was usually much more stuff at a higher level that would make anything like this out of place. Instead, we sometimes have an analyzed "GUI Interface" domain (client to GUI Mechanisms) to provide the rest of the application with a higher level of abstraction of the services of the MFC-based GUI. For instance, if two dialogs that display data are to be closely connected in an application-specific manner, the GUI I/F builds this connection based on the straight VC++-level services that GUI Mechanisms provides. GUI I/F might have a "Document" and "View" that may correspond to higher level concepts from its servers.

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:
> If you are using the MFC document-view architecture, then
> this gives you a good starting point for developing your
> primary architectural domain. And, even though you won't
> do an OOA of the GUI: you will probably want to do an OOA
> of your "view" and a separate domain for your "document".

The "view" and "document" as separate domains? OK! Where do the domains appear on the domain chart? Say I'm using (be sure to view using fixed font):

             "app"
             /    \
            /      \
          ui        \
          |          \
          |       s/w arch
          |        /    \
          |       /      \
       win 95   Visual C++, MFC, COM, ATL

Oh! And where do MFC, COM, and ATL go?

Thanks,

Allen

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> What if you are using MFC/Visual C++ as your UI tool?
>
> Is the UI domain a service domain, implementation domain,
> or architectural domain? Why?

I would regard MFC/Visual as two architectural domains: MFC provides a realized library for accessing the OS' window manager and Visual provides a language and development IDE. These are architectural domains because they do not affect the problem space solution to the problem -- I can swap in Rogue Wave and Borland C++ with a new translation engine.

> Does it just get a mission statement and no models?

No models; this is realized code -- think of it as something you purchased from a third party. As Whipp pointed out, when you are in the Visual IDE, you are effectively acting as a translation engine. Always a mission statement, but in this case pretty trivial because you are merely announcing architectural decisions.

> Does it still have to "pass" the domain-replacement rule?

The phrasing of these three questions suggests to me that you are reading more into these things than they deserve. Neither MFC nor the Visual IDE will define what the semantics are of your sundry GUI screens. They merely provide mechanisms for translating your application's screen semantics into code and OS calls.
Though the Visual IDE provides an encapsulation of the entire window management, that code still has a gazillion hooks into the application where the semantics of the button labeled "Panic!" is actually implemented.

I think you could have a separate UI service domain in the application that would model the application's view of the screen semantics. Such a domain would then invoke the MFC and IDE-generated hooks via wormholes. This has the advantage that it centralizes the conversion between application semantics and window management. In particular, you avoid any temptations in the application to structure sequences of wormhole calls to fit MFC/Visual rather than being generic communications with the GUI.

But I am inclined to agree with Whipp that such domains are probably not very exciting -- it would be hard to find active objects, since most of the objects would simply be data holders that the bridges read/write from/to. Most of the processing would simply link the application bridges to the MFC or IDE-generated bridges -- in effect just a translation.

However, I think there is merit in having the UI domain and doing an IM on it, because this sorts out the data structures, which helps with specifying the bridges. In this case you short-cut the domain when defining the bridges and go directly from the application's wormholes to MFC (or to interfaces developed in the Visual IDE). A second advantage is that by doing the IM for the domain you provide a blueprint for using the Visual IDE -- instead of winging it in the IDE to develop a GUI, you effectively have a rote translation task. The downside is that the domain IM is translated separately via the Visual IDE, so it is easy to get out of synch with the actual bridges when doing maintenance.

--
H. S. Lahman               There is nothing wrong with me that
Teradyne/ATD               could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Greetings!

Stealing what Patrick Ray and Peter Fontana said in a discussion (way back!) regarding confusion about what is a windowing system, an application, and an operator interface:

       Application (analyzed)
          /         |
         /          |
    Operator        |
    Interface       |
    (analyzed)      |
      /    \        |
     /      \       |
    |    Windowing   \
    |    System       \
    |    (realized)    \
    |        |          \
    |        V           V
    |      SW Mechanisms
    |     (Arch; realized)
    |        /    \
    |       /      \
  Operating        Language
   System          (realized)
  (realized)

As Patrick Ray says...

1) The "application" is the thing which organizes services and/or data at the level closest to the abstraction understood by the user of a system, if we can consider the application as a single abstraction.

2) An "Operator Interface" is an organization of the graphical i/o between the application and "windowing system". We can generalize this to user interfaces, and the "windowing system" would become the console services or some such. The "Operator Interface" and "application" are frequently modeled as a single domain; however, if there is ever a need to provide a user interface from different windowing systems, then it becomes very hard to maintain the two as a single domain.

3) A "windowing system" is the thing which puts pixels to the screen and retrieves input from the mouse or keyboard or whatever. It could be considered a part of an operating system. Examples are Windows and X/Motif. I know of no case where the windowing system is not a server.

And as Peter Fontana says...
"Windowing System" is a domain of RAD-developed MS Visual C++ 4.0 Visual WorkBench classes and controls. Our "main" program function is the VWB-generated thingie - way down in "Windowing System". Flow of control was as you expect: "down" from main, from which a VWB-generated function starts the OOA thread of control - in SW Mech. But this in no way affects how our domain chart looks - the flow of requirements comes from the application and goes down through the service domains." I guess I follow this :), but I am having trouble understanding the difference between "requirements flow" and "control flow". There is no real question here--just a general lack of understanding. So, would anyone care to discuss? Maybe also comment on what it means to model the "Operator Interface" domain. Kind Regards, Allen Theobald Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen Theobald wrote: > I guess I follow this :), but I am having trouble understanding the > difference between "requirements flow" and "control flow". There is > no real question here--just a general lack of understanding. So, > would anyone care to discuss? Maybe also comment on what it means to > model the "Operator Interface" domain. A simple example should help (hopefully this one will :-) ): Consider a requirement: "A user must be able to cancel the operation; but safeguards must be in place to help prevent accidental cancellation" This requirement can be realised in the application domain as an event: "operation cancelled". This would come from a wormhole: "cancel operation". But what about the safeguards. Lets be simple minded and assume this will be a "are you sure?" dialog. This does not need to go in the application domain. Instead, we document the wormhole: "this wormhole should only be invoked if it is certain that the operation should be cancelled". The application domain is the client; the operator interface is the server. The constraint on the wormhole places a requirement on the operator interface: it must allow the use to cancel the operation; and to check that [s]he's sure. Thus we can say that the initial requirement has flowed through the application domain and into the operator interface. Inside the operator interface model, we can imagine a state machine with states such as: operation in progress, user initiated abort, abort confirmed, etc. The progression through the states would probably control the set of dialogs (or whatever) that the user sees. The fact that the operator interface wants to present a set of dialoges (and receive responses) places requirements on the windowing system. (The windowing system is probably not a single domain - the design of the screens (dialogs) is neither part of the operator interface nor part of the basic display layer). The windowing system must detect button presses and send appropriate events to the operator interface. It is possible to trace the flow of requirements from the initial spec, through the application and operator interface layers and into the windowing system. On the other hand, if we attempt to trance the flow of control, we would see events starting at the windowing system and going to the operator interface; the flow then splits: the operator interface changes the on-screen dialog (flow back to the windowing system) and, if appropriate, sends an event (through the wormhole) to the application domain. 
Hopefully this example sheds a bit of light on the difference between requirements flow and control flow. (BTW, data flow is different again). It should also serve, in a small way, to show what might be modelled in the operator interface domain.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 07:26 AM 9/18/98 +0000, shlaer-mellor-users@projtech.com wrote:
>Allen Theobald writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>> Our "main" program
>> function is the VWB-generated thingie - way down in "Windowing
>> System". Flow of control was as you expect: "down" from main,
>> from which a VWB-generated function starts the OOA thread of
>> control - in SW Mech.
>>
>> But this in no way affects how our domain chart looks - the flow
>> of requirements comes from the application and goes down through
>> the service domains."
>>
>I guess I follow this :), but I am having trouble understanding the
>difference between "requirements flow" and "control flow". There is
>no real question here--just a general lack of understanding. So,
>would anyone care to discuss? Maybe also comment on what it means to
>model the "Operator Interface" domain.

OK - first: flow of control is a very mechanical concept. Basically it's "who calls whom" in the case of a function, or "who sends a message to whom" in the case of an asynchronous message. Either way, the flow of control is an implementation-level concept. You can take a typical scenario for a typical system and follow the flow of control all over the domain chart - up, down, across, etc.

For instance, follow the flow of control resulting from a button press in a cellular phone (simplified from a specific real-world case):

Physical:
  human presses button
  button touches underlying board

Electrical:
  button board sends a conditioned pulse to the A/D input block
  A/D block feeds a system input bus
  an interrupt is set for the main processor

Software:
  O/S domain recognizes a pending input interrupt and schedules the
    appropriate handler in a service task
  Software Mechanisms domain's interrupt mechanism is activated by the
    O/S, and it calls the appropriate client's handler service (the
    Operator Interaction "Button Pressed" service)
  OI."Button Pressed" calls an appropriate client service (Cellular
    Telephony "System Incident" service) AND sends an event to its own
    Keypad object
  ...

...as you can see, this flow of control was initially "up" the domain chart, and at one point it split into two paths - one further up, and another "across". Eventually the second path ended up back down in hardware, putting something on a screen. Understanding flow of control is simply following the domino chain of cause and effect through the system at the mechanical level.

Flow of requirements is more abstract. This is a domain chart concept where the system-level "product concept" and "user level" requirements are levied on the system - mostly on the application domain. Then, as each domain formulates how it will satisfy its requirements, they turn to their server domains and require services and other capabilities from them.
The bridge arrows on the domain chart show this downward flow of requirements from the system-level and application domain to the server domains in the system.

In some cases, a pair of domains may clearly interact, but it may be difficult to decide which is the client and which is the server. We select the domain at the higher level of abstraction (closer to the application; further from structs and bits) to be the client, and simply state that requirements always flow downward on the domain chart. In our domain training, we teach that domain charts should be arranged with higher-level-of-abstraction domains placed higher on the chart.

Contrary to earlier convention, our experience shows all domains under development (realized and analyzed) should show their requirements needs on the domain chart - by having bridges OUT to their servers. Only true off-the-shelf domains should appear as a "black box". We've heard of fully realized projects that used domain modeling (but no OOA) with reasonable results.

OK - I hope we nailed it: you don't worry about flow of control on a domain chart. Thanks.

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp writes to Allen T. in shlaer-mellor-users:

> ...The fact that the operator interface wants to present a set of
> dialogues (and receive responses) places requirements on the
> windowing system. (The windowing system is probably not a single
> domain...

What domain(s) does it constitute?

> ...the design of the screens (dialogs) is neither part of the
> operator interface nor part of the basic display layer)...

Where, exactly, do these go?

On a similar note, is this state model completely out of place in S-M OOA (not counting notation, etc.) for modelling the sequence of inputs/outputs in an online system?

      |                       |-----------------------
 -----|-----------------------|   V                  V   |
 |   --Initial screen----------------                    |
 |   |        ^               ^             |         |  |
 V   V        |               |             V         V  |
 Screen A   Screen B       Screen C      Screen D

Thanks,

Allen

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> I guess I follow this :), but I am having trouble understanding the
> difference between "requirements flow" and "control flow". There is
> no real question here--just a general lack of understanding. So,
> would anyone care to discuss? Maybe also comment on what it means to
> model the "Operator Interface" domain.

Fontana and Whipp provided eloquent and slightly different spins on this. So let me try a third, just to ensure that entropy increases.

I find it more aesthetically satisfying to think in terms of "communication" rather than "flow of control" in the domain context. The reason I prefer "communication" is that one can have bridge messages that the receiving domain ignores. It also implies a message abstraction that is, in my view, a higher level of abstraction, and the DC is about as abstract as you can get without going completely existential. However, the basic issues are the same.

Requirements flows are represented by the arrows in the DC. But communication is not represented (except, possibly, in the supporting bridge descriptions).
Therefore it doesn't matter where a communication originates (i.e., from which side of the bridge) at the DC level of abstraction.

[BTW, some offline mail I sent to you with a mongo .doc file got bounced and I had trouble resending it without getting it uuencoded. (For some obscure reason the mailer kept routing it through DECNet.) I also noticed that the address is not the same as this message. Did you ever get it as a proper attachment?]

--
H. S. Lahman               There is nothing wrong with me that
Teradyne/ATD               could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> Dave Whipp writes to Allen T. in shlaer-mellor-users:
>
> > ...The fact that the operator interface wants to present a set of
> > dialogues (and receive responses) places requirements on the
> > windowing system. (The windowing system is probably not a single
> > domain...
>
> What domain(s) does it constitute?

Who can say? It's all in the OS (or below), so it probably doesn't matter. Domains _might_ include: interaction of windows, manipulation of shapes, rendering polygons, etc. If a 3D graphics card does some of this, then interfaces will be needed; but it's still in the OS. Far more interesting is...

> > ...the design of the screens (dialogs) is neither part of the
> > operator interface nor part of the basic display layer)...
>
> Where, exactly, do these go?

Well, the actual screen details could be considered as the population of the windowing system. Screen design itself has a set of internal rules. For example, does the "OK" button go to the left, or right, of the "Cancel" button (or above/below)? I'm not an expert in this field, so I can't say precisely how the information would be structured. But the word "design" in "dialog design" implies that design effort is required. So it can't be part of the OS. Similarly, it doesn't appear to be part of the operator interface, which is purely a logical view of the interactions between the user and the system.

> On a similar note, is this state model completely out of place in S-M
> OOA (not counting notation, etc.) for modelling the sequence of
> inputs/outputs in an online system?
>
>       |                       |-----------------------
>  -----|-----------------------|   V                  V   |
>  |   --Initial screen----------------                    |
>  |   |        ^               ^             |         |  |
>  V   V        |               |             V         V  |
>  Screen A   Screen B       Screen C      Screen D

It may be OK. I'd be a bit worried that "Screen A" has no outgoing transitions. I would also be concerned that it may be a bit too close to the design. The "Moore" model of state machines can be quite limiting as soon as you start thinking in terms of direct consequences. You start wanting to say things like "on exit from 'initial screen' do this", which is not possible.

If all the events in the state model are generated directly from a wormhole then you'll have problems. If there is an object(s) for each screen then you should try to justify the "manager" object. (In OO, it is often better to distribute behaviour amongst many objects). You might be OK, but I can't say from just one state model.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
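As an illustration of the "distribute behaviour amongst many objects" point, here is a minimal C++ sketch with invented names: each screen instance owns its own outgoing transitions, so there is no central screen-manager state machine to justify.

    #include <map>
    #include <string>

    class Screen {
    public:
        explicit Screen(const std::string& name) : name_(name) {}

        void AddTransition(const std::string& input, Screen* next) {
            next_[input] = next;
        }

        // The screen to display after this input; unrecognised inputs
        // leave us where we are.
        Screen* OnInput(const std::string& input) {
            std::map<std::string, Screen*>::iterator it = next_.find(input);
            return (it == next_.end()) ? this : it->second;
        }

        const std::string& Name() const { return name_; }

    private:
        std::string name_;
        std::map<std::string, Screen*> next_;
    };

    int main() {
        Screen initial("Initial"), a("Screen A"), b("Screen B");
        initial.AddTransition("next", &a);
        initial.AddTransition("other", &b);
        a.AddTransition("back", &initial);   // Screen A does have a way out

        Screen* current = initial.OnInput("next");   // -> Screen A
        current = current->OnInput("back");          // -> Initial
        return current == &initial ? 0 : 1;
    }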
baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
--------------------------------------------------------------------

In general, is it wrong for a domain to access data (through a bridge) from a domain which is above the accessing domain on the domain chart?

Would this violate the client-server relationship?

Thanks,
Bary Hogan
LMTAS

David Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

On the subject of server-client relationships, would I be right in saying that an event that is sent by the client to the server is best named after the action the server will do on receiving it, whereas an event which is sent by a server to a client is best named after the occurrence which caused it to be sent?

David Pedlar
dwp@ftel.co.uk

"Michael M. Lee" writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 03:19 PM 9/22/98 +0000, you wrote:
>baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>In general, is it wrong for a domain to access data (through a bridge)
>from a domain which is above the accessing domain on the domain chart?

No, it's quite OK.

>
>Would this violate the client-server relationship?

No, the direction of the bridge denotes the flow of requirements, not the flow of data or control. This is an important distinction.

>
>Thanks,
>Bary Hogan
>LMTAS
>

--------------------------------
M O D E L   I N T E G R A T I O N
Model Based Software Development
500 Botany Court
Foster City, CA 94404
mike@modelint.com
650-341-2544(v)  650-571-8483(f)
---------------------------------

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 03:19 PM 9/22/98 GMT, shlaer-mellor-users@projtech.com wrote:
>baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>In general, is it wrong for a domain to access data (through a bridge)
>from a domain which is above the accessing domain on the domain chart?

Yes - for a direct access.

>Would this violate the client-server relationship?

It violates the separation of subject matter. The correct way to get the data across is to have the domain with the data ("D") provide a service to the domain needing the data ("N"). The alternative - having "N" access an object attribute from "D" directly - requires "N" to know some details about the objects and attributes in "D", and this is bad.

You should always be able to pass a form of the substitution test: imagine that "D" is a realized domain implemented in C code (instead of being an analyzed domain) - now abstract an interface between "D" and "N" that works in both cases.

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------
At 04:59 PM 9/22/98 +0100, shlaer-mellor-users@projtech.com wrote:
>David Pedlar writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>On the subject of server-client relationships,
>
>would I be right in saying that an event that is sent
>by the client to the server is best named after the
>action the server will do on receiving it,
>
>whereas an event which is sent by a server to a client is
>best named after the occurrence which caused it to be
>sent?

Actually, events should not be sent between domains at all. This would cause you to fail the substitution test, where you would remove the analyzed domain that receives the event ("R") and substitute in a realized (hand-coded) domain.

Instead, have "R" publish an appropriate service to allow other domains to inform it of incidents in the system. Then this service can send an event, and do whatever else needs to be done.

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 03:19 PM 9/22/98 GMT, shlaer-mellor-users@projtech.com wrote:
>baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>In general, is it wrong for a domain to access data (through a bridge)

Whoa - I apologize for my earlier post on this - I missed the "through a bridge" section (obviously). Mike Lee's answer is right on. Sorry.

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Michael M. Lee wrote:
>baryh@airmail.net (Bary Hogan) writes to shlaer-mellor-users:
>
> >In general, is it wrong for a domain to access data (through a bridge)
> >from a domain which is above the accessing domain on the domain chart?
>
> No, it's quite OK.

One note of caution: if the server domain is constructed to get data "from client Foo", and if the attribute-domain of the data is the same in both domains, then this implies that you have the same concept in two domains. You should check that you don't have any domain pollution.

If, however, the server just "needs some data to do its job", and uses a wormhole to get that data, then the fact that the wormhole is bridged to a client is not relevant.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
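A rough C++ sketch of that second case -- the server just "needs some data to do its job" and leaves it to the bridge to decide who supplies it (all names invented for illustration):

    #include <iostream>

    // Server domain: declares the data it needs as an outgoing wormhole,
    // with no knowledge of which client will supply it.
    namespace Reporting {
        typedef double (*GetTemperatureFn)();
        GetTemperatureFn getTemperature = 0;   // bound by the bridge

        void LogSample() {
            std::cout << "sample: " << getTemperature() << std::endl;
        }
    }

    // A client domain that happens to own the data in this system.
    namespace FurnaceControl {
        double CurrentTemperature() { return 451.0; }
    }

    // Bridge glue, outside both domains: wires the server's wormhole to
    // whichever client provides the data.
    namespace Bridge {
        void Wire() {
            Reporting::getTemperature = &FurnaceControl::CurrentTemperature;
        }
    }

    int main() {
        Bridge::Wire();
        Reporting::LogSample();
        return 0;
    }

Note that Reporting compiles without any reference to FurnaceControl; only the bridge knows both sides, which is what keeps the wormhole client-agnostic.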
David Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

> >would I be right in saying that an event that is sent
> >by the client to the server is best named after the
> >action the server will do on receiving it,

> Actually events should not be sent between domains at all.

> Instead have "R" publish an appropriate service to allow other domains to
> inform it of incidents in the system. Then this service can send an event,
> and do whatever else needs to be done.

So how would you choose a name for the service?

David Pedlar
dwp@ftel.co.uk

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 09:14 AM 9/23/98 +0100, shlaer-mellor-users@projtech.com wrote:
>David Pedlar writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>> Actually events should not be sent between domains at all.
>
>> Instead have "R" publish an appropriate service to allow other domains to
>> inform it of incidents in the system. Then this service can send an event,
>> and do whatever else needs to be done.
>
>
>So how would you choose a name for the service?

I'll take a shot in the dark and see if a random example can illustrate. If a Hardware Interface (HWIF) domain detects a keypress on some keypad, and needs to inform Operator Interface (OI), then OI publishes a ButtonInputNotify service - which HWIF calls.

In some cases there are many different types of external or system-level incidents that a domain must know about - such as an error notification to the Error Logging domain. If many different incidents cause the same response in a domain, then a single, more general service, such as ErrorIncidentNotify, is published by that domain. One of the input parameters may be a system-level enumeration identifying the particular incident type.

 _______________________________________________________
| Pathfinder Solutions Inc.    www.pathfindersol.com    |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana                voice: +01 508-384-1392  |
| peterf@pathfindersol.com     fax:   +01 508-384-7906  |
|_______________________________________________________|

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pedlar...

I seem to have missed the original message, so I am responding to the context of Fontana's response.

> > >On the subject of server-client relationships,
> > >
> > >would I be right in saying that an event that is sent
> > >by the client to the server is best named after the
> > >action the server will do on receiving it,
> > >
> > >whereas an event which is sent by a server to a client is
> > >best named after the occurrence which caused it to be
> > >sent?
>
> Actually events should not be sent between domains at all. This would cause
> you to fail the substitution test, where you would remove the analyzed domain
> that receives the event ("R") and substitute in a realized (hand coded) domain.
>
> Instead have "R" publish an appropriate service to allow other domains to
> inform it of incidents in the system. Then this service can send an event,
> and do whatever else needs to be done.

As a clarification, an event is directed to an object instance.
Since one domain is expressly prohibited from having carnal knowledge of objects in other domains, it is not possible to send an event directly from one domain to another -- the sending domain should not even be aware that a specific object exists, much less a specific instance of that object.

Prior to the wormholes paper, some tools supported bridges by placing a surrogate object in the domain that represented bridges to other domains. Events could then be modeled in the OOA as being sent to or from those surrogate objects. During translation these surrogate objects would be handled specially and the events could be properly translated by the bridge. Unfortunately this is not a completely clean way to handle this, because the domain is polluted by the surrogate objects. One can envision placing the domain into another application where the original bridges were split or combined because the clients/services were differently conceived (i.e., the domain chart was different). In this case one would have to modify the surrogate objects within the domain OOA.

With the wormholes paper the bridge concept has been refined so that there are no events, per se, between domains, because the client always invokes a synchronous wormhole. However, the service domain's wormhole can produce an event in that domain. The developer connects the client's wormhole to the service's wormhole as a bridge, and the architecture provides the infrastructure for mapping events, responses, etc.

One practical way to think about this is that a domain "owns" one half of a bridge. It defines synchronous services that form an API to the domain. This is the external view of the domain. When a bridge is established between two domains, the developer provides the glue to link the two domain APIs in the appropriate manner. If the APIs match up nicely, this becomes trivial; if they don't match up nicely, it can be tricky.

Now each domain's synchronous services can Do the Right Thing. If an incoming wormhole requires functionality, an event is placed on the domain's queue. If the incoming wormhole wants attribute data (or values calculated from attribute data), that can be extracted and returned synchronously. If the expectation is that after processing an event the domain should return data, then the domain will invoke an outgoing wormhole (synchronous service) to do so when processing is complete. The trick is that the architecture has to provide an infrastructure (the transfer vector) that connects the incoming wormhole to the outgoing wormhole so that the proper API information can be supplied to the other domain with the response (i.e., the instance identifier is saved). When the client's incoming wormhole is invoked for the returned data, it will place the appropriate response event on the client's queue.

--
H. S. Lahman               There is nothing wrong with me that
Teradyne/ATD               could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Peter J. Fontana wrote:
> Actually events should not be sent between domains at all. This
> would cause you to fail the substitution test, where you would remove
> the analyzed domain that receives the event ("R") and substitute in
> a realized (hand coded) domain.
>
> Instead have "R" publish an appropriate service to allow other
> domains to inform it of incidents in the system.
> Then this service
> can send an event, and do whatever else needs to be done.

Whoa! Wait a minute! What does this mean? I always assumed, obviously incorrectly, that the OCM had communications between objects of different domains as well. I need an example, please.

I sometimes wonder if I have the necessary abstract thought processes to understand this stuff... :^)

Kind Regards,

Allen

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Pedlar wrote:
> > Instead have "R" publish an appropriate service to allow other domains to
> > inform it of incidents in the system. Then this service can send an event,
> > and do whatever else needs to be done.
>
> So how would you choose a name for the service?

I don't think there is any firm rule. As with many things, consistency can be more important than the rule itself. I identify two basic types of service: making requests and providing information. Services have senders and receivers, so 4 basic naming rules are required. Services are always named from the perspective of the local domain, never the remote. Naming events is much easier: I always try to use the past tense - an event is something that has happened.

Consider a request (or command). The sending domain knows that it wants something done, but it doesn't know who'll provide the service. The naming of the request carries that connotation: it is asking a mediator. For example: "Request Abort Operation". The receiver, OTOH, receives the request as an imperative command: "Abort Operation". However, internally a more passive tone may be used for the event: "Abort Requested". A request for information could be "Get ETA" (and could have the same name in the receiver if it is synchronous; otherwise "get" can be replaced with "provide", "supply" or "send" -- be consistent!). Of course, as there are two different domains, there may be an additional semantic shift in the naming of the services.

The other type of service is the provision of information, solicited or otherwise. Here, the tone used by both sender and receiver is the same. There is no naming distinction between the sender and receiver of the information. Naming differences are purely due to the semantic shift between domains.

There are two basic types of information: discrete events and continuous state. Continuous-variable information is not supported by Shlaer-Mellor. It must be polled by an outgoing request, or quantised by an external device. (Note that a discrete event may represent a change of continuous state.)

Discrete-event information is named in the same way as events - in the past tense. For example: "Nuclear Reactor has Exploded". Continuous-state information is named in the present tense. For example: "Nuclear Reactor is Overheating", "Telephone is Ringing" or "ETA is: 5 minutes". The last example shows that the naming can work even when parameters are involved (and that 'state' doesn't have to last very long). The wormhole is "ETA is", which takes a parameter of type duration.

The rules, essentially, ensure that the naming is grammatically correct both when the information is sent and when it is received. However, the naming is in English, and thus no rule can be applied too rigidly. It is also true that other people are successful in using other conventions. Some people like the word "Notify" for all information services.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone.
+49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

David Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > > On the subject of server-client relationships,
> > > would I be right in saying that an event that is sent
> > > by the client to the server is best named after the
> > > action the server will do on receiving it,
> > > whereas an event which is sent by a server to a client is
> > > best named after the occurrence which caused it to be
> > > sent?

Lahman said-
> With the wormholes paper the bridge concept has been refined so that there are no
> events, per se, between domains because the client always invokes a synchronous
> wormhole.

Is it always the client that does the invoking?

> However, the service domain's wormhole can produce an event in that
> domain. The developer connects the client's wormhole to the service's wormhole as
> a bridge and the architecture provides the infrastructure for mapping events,
> responses, etc.

Does the wormhole have a name, and is the choice of its name affected by whether the receiving domain is client/server in relation to the calling domain?

My original question was meant to be about the problem of naming instances of the 'things' that allow communication between domains. However, because I said 'event' when I should have said 'service' or 'wormhole', the point of my question seems to have been overlooked.

David Pedlar
dwp@ftel.co.uk

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 08:25 AM 9/23/98 +0000, shlaer-mellor-users@projtech.com wrote:
>Allen Theobald writes to shlaer-mellor-users:
>--------------------------------------------------------------------
>
>Peter J. Fontana wrote:
>
>> Actually events should not be sent between domains at all.
>Whoa! Wait a minute! What does this mean? I always assumed,
>obviously incorrectly, that the OCM had communications between objects
>of different domains.

HS's recent post on this topic nicely explains some of the issues with inter-domain events, and how the evolution of the wormhole eliminates the "need" for them. In our training and practice, Pathfinder positions the OCM with a single domain focus, and only shows bridge service invocations between outside domains and the subject domain.

>I need an example, please.

OK - there's the InputBoardInterface (IBIF) domain that is responsible to detect keypresses on a keypad, the AudioHandler (AH) domain that can make sounds, and an OperatorInterface (OI) domain for which we will show a highly simplified OCM. Pathfinder teaches OCMing on a scenario basis, so we will show the OCM for the pressing of a single button ("NEXT") that causes a "tic" keypress sound and a "Dialog" transition:

OCM for OI scenario:

        "NEXT" key press
              |
              | 1) OI.ButtonDetected
       _______V______________
      | OperatorInteraction  |     // this represents services of OI
       ----------------------
           |            |          // flows out of the domain symbol represent
           |            |          // the actions of the invoked services
           | 2) KP2:ButtonPressed \
          /                        \ 3) DLG4:Transition
      ____V____               _____V___
     | KeyPad  |             | Dialog  |<--.
      ---------               ---------     \
         |                        |          |
         | 4) AH.SoundTic         |          |
      ____V___________            \_________/  5) DLG2:Activate
     | AudioHandler   |
      ----------------

INCIDENT NOTES:
1) service invocation from IBIF
2) event to KeyPad object indicating a key was pressed
3) event to Dialog indicating a dialog transition is needed
4) service invocation to cause button tic tone from AudioHandler server domain
5) currently active Dialog instance sends Activate event to next Dialog instance

NOTE - OI.ButtonDetected (1) and AH.SoundTic (4) are both bridge service invocations across domain boundaries - not OOA event transmissions.

 _______________________________________________________
| Pathfinder Solutions Inc.      www.pathfindersol.com  |
|                                       888-OOA-PATH    |
| effective solutions for software engineering challenges|
| Peter Fontana               voice: +01 508-384-1392   |
| peterf@pathfindersol.com      fax: +01 508-384-7906   |
|_______________________________________________________|

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote:
[...]

Sorry to follow my own post: I used the term "past tense"; I should have included the possibility of the pre-present tense at the same time. The examples were OK, though.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de
Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

David Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

> > So how would you choose a name for the service?

Dave Whipp wrote:
> I identify two basic types of service: making requests and
> providing information. Services have senders and receivers,
> so four basic naming rules are required.

Can we say that when a client initiates communication to a server, it would always be a command rather than information? And conversely that a server would never send a command to a client?

David Pedlar
dwp@ftel.co.uk

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Pedlar wrote:
> Can we say that when a client initiates communication
> to a server, it would always be a command rather
> than information?
> And conversely that a server would never send a
> command to a client?

No. I'm sure you can think of examples where an application provides information to a user interface (i.e. client sends information to server). Conversely, the user interface may send requests to the application, e.g. "Go", "Abort".

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de
Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pedlar...

> > With the wormholes paper the bridge concept has been refined so that there are no
> > events, per se, between domains because the client always invokes a synchronous
> > wormhole.
>
> Is it always the client that does the invoking?

I think this is best clarified with a more specific example. Consider a client domain that requests a service domain to do something. One scenario (there are others depending upon the architectural mechanisms chosen) might go as follows.

(1) The client domain invokes a wormhole synchronous service to request the service domain to do something.
Since the client expects an asynchronous return of data, a transfer vector is supplied that describes to whom the returning event should be sent. (This transfer vector is part of the bridge translation; the developer could use coloration to identify the target instance for the response event.)

(2) The service domain provides an API of synchronous services that can deal with the request. The bridge might be implemented by simply placing the appropriate service domain API call in the client's request wormhole synchronous service (see next question). The arguments would be the transfer vector handle and any relevant data.

(3) The service domain's request synchronous service stores the handle of the transfer vector, with some cross-reference mechanism, against a return wormhole synchronous service in the service domain that will issue the response when processing is complete. Then the synchronous service places the appropriate event on the queue and returns.

(4) The client's request wormhole then returns and the client goes about its business. Meanwhile the service eventually processes the event.

(5) When the service domain finishes processing, a response wormhole synchronous service will be invoked somewhere in the domain's state machines. This is essentially the reverse of Step (2). At that point the bridge matches that wormhole to an API synchronous service in the client domain. The bridge also notes that the response wormhole is linked to the request wormhole, so it fetches the stored handle to the transfer vector. The appropriate client API synchronous service is invoked with arguments for the transfer vector handle and the response data.

(6) The client's response wormhole accepts the data and the handle to the transfer vector. It looks up the transfer vector and identifies the recipient instance. It then places the proper event on the queue targeting the instance and carrying the response data.

In this case the client/service relationship remains intact throughout. But the invoking of sundry wormhole synchronous services is done from both directions because this is merely a communication mechanism.

However, one can take this to a higher level of abstraction. When speaking of bridges the client/service relationship does not always follow the domain chart's mapping of "service" domains. Consider a GUI based application. The message loop is probably buried down in some implementation or architectural domain. Yet every action the application takes is in response to messages generated in that domain that are passed up the DC. Thus that low level domain is the client and the application is the service. The point I am belaboring here is that client/service in the context of bridges reflects communications, while client/service in the context of domains on the DC reflects requirements flows.

> Does the wormhole have a name, and is the choice of its name
> affected by whether the receiving domain is client/server in
> relation to the calling domain?
>
> My original question was meant to be about the problem of naming
> instances of the 'things' that allow communication between domains.
> However, because I said 'event' when I should have said 'service'
> or 'wormhole', the point of my question seems to have been
> overlooked.

The short answer is Yes. The wormhole on the ADFD has an identifier just like any other process. Since most architectures will map this into a synchronous service function, the identifier correlates with the function name.
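[To make the transfer vector bookkeeping of steps (1)-(6) above concrete, here is a minimal C++ sketch. All of the names and the table structure are invented for illustration only; a real architecture would generate its own infrastructure.]

    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical transfer vector: records who in the client domain
    // should receive the asynchronous response event.
    struct TransferVector {
        std::string target_instance;   // identifier of the recipient instance
        int         response_event;    // event to place on the client's queue
    };

    static std::map<int, TransferVector> vectors;   // handle -> vector
    static int next_handle = 0;

    // Step (1): the client's request wormhole registers a vector and
    // passes its handle (plus any data) across the bridge.
    int client_request_wormhole(const std::string& instance) {
        int handle = ++next_handle;
        vectors[handle] = TransferVector{instance, 42};
        return handle;   // the bridge forwards this to the service domain
    }

    // Step (6): the client's response wormhole looks the vector up again
    // and places the proper event on the queue, targeting the saved instance.
    void client_response_wormhole(int handle, const std::string& data) {
        TransferVector tv = vectors.at(handle);
        vectors.erase(handle);
        std::printf("queue event %d to instance %s carrying '%s'\n",
                    tv.response_event, tv.target_instance.c_str(), data.c_str());
    }

    int main() {
        int h = client_request_wormhole("Dialog-7");
        client_response_wormhole(h, "ETA is: 5 minutes");   // service responds later
    }

[The point the sketch tries to show is only the round trip: the handle saved at step (1) is the thing that lets step (6) find the right instance again.]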
The long answer is that the wormhole is an abstraction and it does not necessarily have to be implemented as a function, so the identifier is an abstraction of the wormhole identification mechanism. Though there is an identifier associated with the wormhole, it has meaning only in the invoking domain. It is up to the bridge to provide the connection between that wormhole and a corresponding synchronous service in the other domain -- only the bridge understands both sets of synchronous services that form the two domains' APIs. Consider domains A and B (ignoring xfer vectors).

In Domain A:

   bridge_call_A1 (args)     // this is the wormhole designated A1
   {
      // this is pseudo code provided by the bridge
      REF DOMAIN B           // bridge requests B's context from architecture
      bridge_call_B5 (args)  // invoke B's request wormhole
   }

(Note that REF DOMAIN B is simply a place holder for an implementation mechanism, such as an RPC, that allows bridge_call_B5 to be invoked.)

In Domain B:

   bridge_call_B5 (args)     // this is the domain's bridge API
   {
      // pseudo code to place event on queue
   }

In this particular scheme the bridge is represented by the actual code in the bridge_call_A1 and bridge_call_B5 routines. Inside Domain A, only the bridge_call_A1 stub is visible, and the code in bridge_call_B5 can only see entities in Domain B. Note that to port Domain A to another application, the bridge_call_A1 routine would be rewritten, but no rewrite of bridge_call_B5 would be necessary to port Domain B to another application. In this particular scheme, the only bridge recoding for porting occurs in the request wormholes for _communication_ client bridges. One could argue that in this scheme the real bridge is the bridge_call_A1 code and that bridge_call_B5 is simply translation support that defines the service interface to Domain B.

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Sam Walker writes to shlaer-mellor-users:
--------------------------------------------------------------------

__________________________
Sam Walker
Software Engineer
Advanced Technology Division
Tait Electronics Ltd
Phone (64) (03) 358 6683
Fax (64) (03) 358 0432

>>> Dave Whipp 24/September/1998 07:45pm >>>
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Pedlar wrote:
>> Can we say that when a client initiates communication
>> to a server, it would always be a command rather
>> than information?
>> And conversely that a server would never send a
>> command to a client?

>No. I'm sure you can think of examples where an application
>provides information to a user interface (i.e. client sends
>information to server).
> Conversely, the user interface may send requests to the
> application, e.g. "Go", "Abort".

I would be very surprised to see a User Interface issue 'commands' to an application. I would expect the User Interface to provide indications or warnings to the application, e.g. "User Requested GO", "User Requested ABORT".

Sam

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Walker...

> > Conversely, the user interface may send requests to the
> > application, e.g. "Go", "Abort".
>
> I would be very surprised to see a User Interface issue
> 'commands' to an application.
> I would expect the User Interface
> to provide indications or warnings to the application, e.g. "User
> Requested GO", "User Requested ABORT".

At one level I agree with you that it is a good idea to identify messages emitted from service domains with language that preserves the perspective of being a service -- where practical. However, I think there are some things that mitigate the breach of that practice.

First, a quibble. Whipp used the term "requests" and you inferred the term "commands". I could argue that there is a distinction in that requests can be ignored but commands can't be ignored. If one already has the mindset that messages between domains are requests rather than commands, then the tone of the language has less relevance.

Second, at the level of individual messages in a bridge we are dealing with communications. In that context it is useful to think of the client as the domain issuing a message and the service as the domain responding to the message. This is a different level of abstraction than the DC itself, where client and service are defined in terms of large scale requirements flows. In the communication context any domain can be a client, so requests can emanate from anywhere.

Third, I think it depends upon how one defines the nature of the service. I believe one could legitimately define the mission of the User Interface to be something like, "A service that retrieves User commands...". If so, then the message is, indeed, a command and it is fair to identify it in that tone.

Fourth, in the case of a User Interface domain _every_ outgoing request message would be prepended with "User Requested...". This strikes me as superfluous information if it applies to the entire set.

Finally, for the examples, it is probably quite clear in the problem space that when the user hits a control the desired outcome is for the application to do something specific. That is, "Go" and "Abort" have clear problem space semantics. By qualifying them to achieve a particular tone one may be obscuring those recognized semantics.

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Sam Walker writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

>Responding to Walker...

>> > Conversely, the user interface may send requests to the
>> > application, e.g. "Go", "Abort".
>
>> I would be very surprised to see a User Interface issue
>> 'commands' to an application. I would expect the User Interface
>> to provide indications or warnings to the application, e.g. "User
>> Requested GO", "User Requested ABORT".

>At one level I agree with you that it is a good idea to identify
>messages emitted from service domains with language that
>preserves the perspective of being a service -- where practical.
>However, I think there are some things that mitigate the breach
>of that practice.

>First, a quibble. Whipp used the term "requests" and you
>inferred the term "commands". I could argue that there is a
>distinction in that requests can be ignored but commands can't
>be ignored.

By definition.

>If one already has the mindset that messages between domains
>are requests rather than commands, then the tone of the
>language has less relevance.
Replacing the original statement with "request" does not lessen the statement. Specifying the conditionality of fulfilling the command/request does not necessarily imply anything about the client/server relationship.

>Second, at the level of individual messages in a bridge we are
>dealing with communications. In that context it is useful to think
>of the client as the domain issuing a message and the service as
>the domain responding to the message. This is a different level
>of abstraction than the DC itself, where client and service are
>defined in terms of large scale requirements flows. In the
>communication context any domain can be a client, so requests
>can emanate from anywhere.

Sounds like possible domain pollution if the user interface makes requests to an application. I would like to clarify that in the example it is the user making the request to the application, not the user interface. The user interface is just a simple transport layer between the application and the user.

>Third, I think it depends upon how one defines the nature of
>the service.
>I believe one could legitimately define the mission of the User
>Interface to be something like, "A service that retrieves User
>commands...". If so, then the message is, indeed, a command
>and it is fair to identify it in that tone.

>Fourth, in the case of a User Interface domain _every_ outgoing
>request message would be prepended with "User
>Requested...". This strikes me as superfluous information if it
>applies to the entire set.

Omitting these details may cause the communication to be misleading. 'User requested go' is a lot more meaningful than 'Go', and relies on less intuition for a reviewer/reader. The real meaning of the communication is 'the go button has been pressed' from the perspective of the User Interface.

>Finally, for the examples, it is probably quite clear in the
>problem space that when the user hits a control the desired
>outcome is for the application to do something specific. That is,
>"Go" and "Abort" have clear problem space semantics. By
>qualifying them to achieve a particular tone one may be
>obscuring those recognized semantics.

Not introducing this semantic shift in the context of different subject matter is really a form of pollution. You can lose reusability of these service domains by having names/concepts within non-application domains which are in fact specific to the application.

Sam.

__________________________
Sam Walker
Software Engineer
Advanced Technology Division
Tait Electronics Ltd
Phone (64) (03) 358 6683
Fax (64) (03) 358 0432

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Would anybody like to report on last week's SMUG 98 Conference that took place in Cheltenham (UK)?

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Walker...

> Sounds like possible domain pollution if the user interface makes
> requests to an application.

I believe this is the core issue of our disagreement. I do not see any domain pollution inherent in one domain making a request of another domain. In fact, I see prepending 'User Requested' as a form of pollution. More below.

> I would like to clarify that in the
> example it is the user making the request to the application, not
> the user interface.
> The user interface is just a simple transport
> layer between the application and the user.

True. But the application doesn't know whether the user made that request directly or through a service domain. We develop VXI device drivers. The VXI spec requires that we provide a GUI that the user (a test program developer) can use to figure out how the driver works. However, in production there is no user because the driver is invoked programmatically. The semantics of the requests to the application are the same in both environments. In one case there is a User Interface domain that relays requests directly from a user. In the other there is no user; only the user's program.

> Omitting these details may cause the communication to be
> misleading. 'User requested go' is a lot more meaningful than
> 'Go', and relies on less intuition for a reviewer/reader.
> The real meaning of the communication is 'the go button has been
> pressed' from the perspective of the User Interface.

> Not introducing this semantic shift in the context of different
> subject matter is really a form of pollution. You can lose
> reusability of these service domains by having names/concepts
> within non-application domains which are in fact specific to the
> application.

If you moved the application into a context where there was no User Interface domain and the 'Go' was sent directly to the Application domain, would you still want to call it 'User Requested Go'? Even if there was no 'user' in the normal sense in the new environment?

Returning to my VXI example above, the 'Go' message in the development environment would have to be called 'User Requested Go'. But in the production environment the message would have to be called 'Test Program Requested Go' because the test program is the "user" rather than the test program developer. This seems to me to be the real pollution that precludes reuse, because it could be eliminated by choosing the generic 'Go' understood by the Application domain and letting only the User Interface domain worry about a specific type of user.

Put another way, the Application domain understands what 'Go' means in the context of its subject matter. But it does not know nor care about the environment from which the 'Go' came. By including 'User Requested' one pollutes this semantic with a characteristic of a particular subset of environments where the Application domain might be used.

Now, having said all this, I think we are perhaps not so far apart in reality. A bridge has two parts: the side that initiates a message and the side that accepts the message. In reality, the UI domain might invoke a wormhole called 'User Requested Go'. The bridge would then translate this into the message that the Application domain wants to hear, 'Go'. So it may well be that our differing views are due to our standing on different ends of the bridge.

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike,

Since you asked, I am kind of new to this group (I joined after the SMUG 98 conference) so I apologise in advance for anything I say which has already been said or which upsets anyone! :-)

I went along to the conference and I was generally impressed with the quality of speakers and the organisation of the event (Nice one, Karen).
Having been away from the Shlaer-Mellor arena for a while, I found it interesting to note that the method, and its focus in general, haven't changed very much, but people are talking more eloquently about complex issues of OOA/RD and there have been some interesting extensions to the method. The concept of wormholes is not something that I have come across before.

I also find it interesting to note that there seemed to be a marked recognition at the SMUG conference that UML is not going to go away without a fight and that "resistance is futile". The general message seemed to be "if you can't beat 'em, join 'em", and there were some interesting presentations about how UML can relate to Shlaer-Mellor.

My own personal opinion was that there was a healthy competitive tone between the SM and UML camps, and that there seemed to be a readiness to take on board some of the useful concepts of UML. I felt that one or two of the presentations were ever so slightly over-biased against UML, though. I felt that one or two of the claims suggesting that, for instance, using UML will inherently tie your architecture design into your application design in a way that prevents you from separating the two were a little unfair.

There was some entertaining and informative content about how to avoid "going down in flames" on a SM project and how to minimise some of the risk involved in OOA/RD developments. Coupled with some case studies, there was something there for the managers as well as the techies.

I came away feeling that I still believe that SM is a better solution technically than UML in many cases, but I can't help being concerned that the weight behind UML will introduce the old VHS versus Betamax factor, where the solution with the strongest marketing wins out at the end of the day.

Anyway, I enjoyed it, and hope to go again next year. It was interesting to note that Project Technology weren't there, though.

Best Regards,

Daniel Dearing
Plextek Limited
Communications Technology Consultants

>>> Mike Finn 25/09/98 13:27:00 >>>
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Would anybody like to report on last week's SMUG 98 Conference that took place in Cheltenham (UK)?

Mike

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Daniel Dearing writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> There was some entertaining and informative content about how to avoid
> "going down in flames" on a SM project and how to minimise some of the
> risk involved in OOA/RD developments. Coupled with some case studies,
> there was something there for the managers as well as the techies.

Could you elaborate on what was said about risk and flaming out?

Bruce

Bruce Levkoff
Principal Software Engineer
Cytyc Corporation
85 Swanson Rd.
Boxborough, MA 01719
(P) 978-266-3033 (F) 978-635-1033

Tracy Morgan writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Mike Finn
>
>Would anybody like to report on last week's SMUG 98 Conference
>that took place in Cheltenham (UK)?
> > >Mike > >-- >Mike Finn >Dark Matter | Email: smf@cix.co.uk >Systems Ltd | Voice: +44 (0) 1483 755145 My colleague, Chris Raistrick, a speaker at this year's Conference, will be working on a SMUG 98 Review next week. All things being equal, a report from us should be available on the Kennedy Carter web site (www.kc.com) by the end of next week. Obviously, direct user feedback is invaluable and our report is not meant to be a substitute for that. Tracy *********************************************************************** Tracy Morgan tel : +44 1483 483200 Kennedy Carter Ltd fax : +44 1483 483201 14 The Pines web : http://www.kc.com Broad Street email : tracy@kc.com Guildford GU3 3BH UK "We may not be Rational but we are Intelligent" ************************************************************************ Sam Walker writes to shlaer-mellor-users: -------------------------------------------------------------------- __________________________ Sam Walker Software Engineer Advanced Technology Division Tait Electronics Ltd Phone (64) (03) 358 6683 Fax (64) (03) 358 0432 >>> lahman 26/September/1998 01:25am >>> lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- > Responding to Walker... >> Sounds like possible domain pollution if the user interface makes >> requests to an application. > I believe this is the core issue of our disagreement. I do not > see any domain pollution inherent in one domain making a > request of another domain. In fact, I see prepending 'User > Requested' as a form of pollution. More below. > True. But the application doesn't know whether the user made > that request directly or through a service domain. We develop > VXI device drivers. > The VXI spec requires that we provide a GUI that the user (a > test program developer) can use to figure out how the driver > works. However, in production there is no user because the > driver is invoked programmatically. The semantics of the > requests to the application are the same in both environments. > In one case there is a User Interface domain that relays > requests directly from a user. In the other there is no user; > only the user's program. An interesting concept. A domain unaware it is communicating to a client domain through a service domain. >> By not introducing this semantic shift in the context of different >> subject matter is really a form of pollution. You can lose >> reusability of these service domains by having names/concepts >> within non-application domains which are infact specific to the >> application. >If you moved the application into a context where there was no User >Interface domain and the 'Go' was sent directly to the > Application domain, would you still want to call it 'User > Requested Go'? Even if there was no > 'user' in the normal sense in the new environment? > Returning to my VXI example above, the 'Go' message in the > development environment would have to be called 'User > Requested Go'. But in the production environment the > message would have to be called 'Test Program > Requested Go' because the test program is the "user" rather > than the test program developer. This seems to me to be the > real pollution that precludes reuse because it could be > eliminated by choosing the generic > 'Go' understood by the Application domain and letting only the > User Interface domain worry about a specific type of user. > Put another way, the Application domain understands what 'Go' > means in the context of its subject matter. 
But it does not know
> nor care about the environment from which the 'Go' came. By
> including 'User Requested' one pollutes this semantic with a
> characteristic of a particular subset of environments where the
> Application domain might be used.

I never specified whether the user was a test user or an end user. I felt that the word 'user' was fairly generic, and so I don't believe this pollutes the communication at all.

> Now, having said all this, I think we are perhaps not so far apart
> in reality. A bridge has two parts: the side that initiates a
> message and the side that accepts the message. In reality, the
> UI domain might invoke a wormhole called 'User Requested
> Go'. The bridge would then translate this into the message that
> the Application domain wants to hear, 'Go'. So it may well be
> that our differing views are due to our standing on different
> ends of the bridge.

I absolutely agree with this. So, as a comment to the original naming question ... as the communication passes through a bridge it undergoes a semantic shift. Although, from what Lahman has stated, it does more than just this. E.g., applying this to the example, the User Interface generates 'User Requested Go'; this not only changes to 'Go' but also shifts the message so that it appears as though it emanated from above the Application, making the SAP (Service Access Point) virtual.

Sam

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Bruce,

(apologies if this email rambles on a bit)

I attended two presentations entitled "Going Down in Flames" by Mike Lee and Leon Starr, and "How to Seize the High Ground and Get Your Models Completed" by Leon Starr. Although I could not hope to do justice to the delivery of the double act formed by Mike and Leon, I will summarise the main points as I saw them. I would recommend a look at the KC web page when the review is posted, though.

Going Down In Flames

This presentation was about a collection of projects that had gone badly, and attempted to offer an insight into why they went wrong. The first project was an Ultrasound Diagnostics Medical Instrument. Having successfully developed a first generation product which cornered a large chunk of the market, the challenge was to develop a second generation product.

The team first ran into trouble when they partitioned the domains and very quickly territorialised these domains, making the domain boundaries very inflexible and rendering communication between teams more difficult than it should have been. The next problem was that they were modelling complex parameters and rules in a service domain, thereby neglecting the actual application domain. They could have easily modelled simple objects and behaviour in the application domain instead.

Combined with these problems were management problems in terms of unrealistic schedules, frequent changes in management (the hardware manager ended up running the development) and limited control over some engineers (inflated egos on the strength of their earlier success). The end result was that the overall project took twice as long as was originally intended.

The next disaster seemed to be a result of too many managers and not enough engineers. There were simply too many domains with too high a level of complexity to be modelled by the small number of engineers available.
The requirements were not constrained and engineers were able to introduce new or perceived requirements, which resulted in delays in the completion of their models. The lack of results caused a loss of management confidence and a scaling down of the development to 25% of the original. Management eventually banned the use of SM, and the engineers, faced with lots of models and no better way of completing the project, had to take the modelling "underground".

The third project also suffered from mis-management, in the form of a weak manager who didn't understand the process. There was no requirements policy and there was excessive complexity in the models. The latter was found to be largely due to having a person leading the modelling effort who didn't understand the modelling language and exhibited "case-by-case" thinking - (s)he modelled her/his way out of each problem in the model in isolation, without taking a step back and looking at the overall model. This project was actually recovered by developing a new domain chart, re-organising the labour (the lead modeller was effectively sidelined), linking the requirements to phased releases and instituting a formal model review process.

The lessons learned: (Leon, Mike - forgive me for quoting directly from your slides)

Ongoing commitment and vision required
Start with a domain chart - and update it
Be careful who leads the modelling effort
Start the architecture early
Control incoming requirements
Even the worst disasters can be repaired (but expert help costs less BEFORE a crisis)

How To Seize The High Ground and Get Your Models Complete

This presentation was really an extension of some of the lessons learned in the previous presentation. The Phased Iteration approach was suggested for the development of a product. It offers the following benefits:

Morale boost with each release
The process is tested
Competency routinely demonstrated
Early feedback on requirements
Scale and difficulty of work established early
Changes in technology can be incorporated

Additionally, the importance of requirements management was stressed, along with the policy of fixing requirements into phased releases to avoid missing milestones due to "feature creep".

Both presentations were entertaining and useful. Not only were Mike and Leon good presenters, but they also play a mean game of human table football. You might even see some pictures on the KC web site when the review is published.

I hope this perspective is useful.

Regards,

Daniel Dearing
Plextek Limited
Communications Technology Consultants

>>> Levkoff, Bruce 25/09/98 16:00:13 >>>
bruce.levkoff@cytyc.com (Levkoff, Bruce) writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Daniel Dearing writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> There was some entertaining and informative content about how to avoid
> "going down in flames" on a SM project and how to minimise some of the
> risk involved in OOA/RD developments. Coupled with some case studies,
> there was something there for the managers as well as the techies.

Could you elaborate on what was said about risk and flaming out?

Bruce

Bruce Levkoff
Principal Software Engineer
Cytyc Corporation
85 Swanson Rd.
Boxborough, MA 01719
(P) 978-266-3033 (F) 978-635-1033

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dearing...
> I also find it interesting to note that there seemed to be a marked
> recognition at the SMUG conference that UML is not going to go away
> without a fight and that "resistance is futile". The general message
> seemed to be "if you can't beat 'em, join 'em", and there were some
> interesting presentations about how UML can relate to Shlaer-Mellor.
>
> My own personal opinion was that there was a healthy competitive tone
> between the SM and UML camps, and that there seemed to be a readiness to
> take on board some of the useful concepts of UML. I felt that one or two
> of the presentations were ever so slightly over-biased against UML,
> though. I felt that one or two of the claims suggesting that, for
> instance, using UML will inherently tie your architecture design into
> your application design in a way that prevents you from separating the
> two were a little unfair.

FWIW, my perception of the state of UML vs. S-M is that the underlying methodologies are still quite different. This is especially true now that schools associated with UML have developed that define objects in terms of functionality rather than data (e.g., an object is a collection of responsibilities). I suspect this stems from using use cases directly as an _initial_ design mechanism (i.e., to develop the class models), resulting in Actors, Controllers, etc. objects. This can also be seen at the level of state models: in S-M we tend to think in terms of event communication with a queue manager in the architecture, while in the UML world FSM transitions are usually implemented directly as action function calls.

On the notation front, S-M's notation can be expressed as a subset of UML (though it is still not clear to me how referential attributes are handled -- they seem to have only cursory support in UML). As I read the current position of the S-M camp it is: if you want 100% code generation, model simulation, and platform independence, then you will have to use the S-M subset and the S-M approach. There is a certain logic in this as an "if you can't beat 'em, join 'em and then beat 'em" approach, with the assumption that the benefits of the subset will become self-evident. Unfortunately this depends upon (a) keeping the methodology pristine and (b) there not being so many failures -- from those who never figure out that the S-M approach is needed in addition to the subset -- that OO is killed entirely. I see UMLers being tempted to, say, use use cases to develop the IM just like they develop class models in UML.

As a caveat, I don't think the S-M position is that you cannot be successful using the general UML. Rather, the position is that it is easier to be successful with S-M because the methodology provides a rigor to enforce a number of good practices, while UML depends upon the developer's experience and judgement to provide those practices.

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Going Down In Flames...
[snip]

> The lessons learned: (Leon, Mike - forgive me for quoting directly
> from your slides)
>
> Ongoing commitment and vision required
> Start with a domain chart - and update it
> Be careful who leads the modelling effort
> Start the architecture early
> Control incoming requirements
> Even the worst disasters can be repaired (but expert help
> costs less BEFORE a crisis)

Believe me when I say I'm not missing the point, but... I see this crap (pardon me) happen all the time. There is nothing uniquely S-M about these problems. Why should the methodology take the rap?

-Allen

David Pedlar writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Now, having said all this, I think we are perhaps not so far apart in
> reality. A bridge has two parts: the side that initiates a message and
> the side that accepts the message. In reality, the UI domain might invoke
> a wormhole called 'User Requested Go'. The bridge would then translate
> this into the message that the Application domain wants to hear, 'Go'. So
> it may well be that our differing views are due to our standing on
> different ends of the bridge.

So the invocation control flow would be ...

UI ---User Requested Go---> BRIDGE -----Go-----> Application

Presumably we would have some requirement like R1: "When the User Requests Go, start the application."

This requirement would have to be imposed on the bridge, since it's the only entity that knows about both Users and applications. The bridge would then impose the requirements "When the user requests go, call the wormhole called 'User Requested Go'" and "On receipt of the Go message, start the application" on the two domains, respectively.

Therefore, as far as requirement flow is concerned, the bridge would be acting as client(DC) to both the UI and application domains. Indeed, if one domain were to impose a requirement directly on the other, one of the domains would be polluted by it. If the domain chart were to show the flow of requirements, it would have to be like

UI <----- bridge -----> application

I conclude that the directions of the arrows on typical domain charts are arbitrary and do not actually show a flow of requirements. Does the S-M method say much about requirements?

David Pedlar
dwp@ftel.co.uk

note: client(DC) denotes a client in the context of the Domain chart.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sam Walker wrote:
> Not introducing this semantic shift in the context of different
> subject matter is really a form of pollution. You can lose
> reusability of these service domains by having names/concepts
> within non-application domains which are in fact specific to the
> application.

Pollution is related to mission statements. It is possible to have application-specific service domains. For example: a train controller application may use a user interface that is specifically for viewing the status of the railway system. An object like "train-view" could exist as a counterpart of the "train" object in the application domain. Of course, an application-specific user interface is likely to require services from more generic domains (thus "train-view" may also be a counterpart of "icon" from a windowing system). If the view object has interesting properties or behaviour that are not appropriately represented in either the windowing system or the application domain, then the application-specific service may be required.
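[A minimal sketch of this counterpart idea in C++; all names (CounterpartTable, train_status_changed, the instance ids) are invented for illustration and are not prescribed by the method. It just shows the kind of mapping a bridge might maintain:]

    #include <cstdio>
    #include <map>
    #include <string>

    // Hypothetical counterpart table kept by the bridge between the
    // application's "train" instances and the UI's "train-view" instances.
    struct CounterpartTable {
        std::map<std::string, std::string> view_of;   // train id -> view id

        void link(const std::string& train, const std::string& view) {
            view_of[train] = view;
        }

        // When the application reports a train's status, the bridge
        // forwards it to the counterpart view (which, in turn, might be
        // a counterpart of an "icon" in the windowing system below).
        void train_status_changed(const std::string& train,
                                  const std::string& status) {
            std::printf("update %s: %s\n",
                        view_of.at(train).c_str(), status.c_str());
        }
    };

    int main() {
        CounterpartTable bridge;
        bridge.link("train-42", "train-view-42");
        bridge.train_status_changed("train-42", "delayed");
    }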
It is possible to have application-independent operator interface domains, but these must be specialised through their population files. If this is done in a naive way then the resulting user interfaces will not achieve high user satisfaction ratings.

Having an application-specific service does not cause the domains to fail the domain replacement test. It just reduces the number of possible replacements. A different user interface domain could be used; and a different train control system can be used with the same user interface (which may be important: operator confusion can be fatal).

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de
Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Walker...

> I never specified whether the user was a test user or an end
> user. I felt that the word 'user' was fairly generic, and so I don't
> believe this pollutes the communication at all.

I think this is a matter of context for the subject matters. In my world a User is a person with several refinements (e.g., a test program developer vs. a test technician executing the test). This is quite different than a Test Program. We tend to use the words "customer" or "client" to refer to the more generic concept. This is why I see prepending "User Requested" as pollution -- it assumes a particular subject matter context in my world.

Regarding standing on different ends of the bridge:

> I absolutely agree with this. So, as a comment to the original
> naming question ... as the communication passes through a bridge
> it undergoes a semantic shift.
> Although, from what Lahman has stated, it does more than just this.
> E.g., applying this to the example, the User Interface generates 'User
> Requested Go'; this not only changes to 'Go' but also shifts the
> message so that it appears as though it emanated from above the
> Application, making the SAP (Service Access Point) virtual.

I _think_ I agree with this. Service domains are almost always developed initially in the context of a particular application. Thus the requirements of that particular application will naturally tend to define the service domain's bridge "API". However, if the developer has one eye on domain reuse, an attempt will be made to make the bridges as generic as possible, both for the wormholes that clients invoke and the services' APIs. Similarly, the same can be said of the bridge interface for the highest level application domain if the application can be used in different contexts. [We are forced to do this because our suite of applications is highly layered and designed to be Plug & Play.]

Thus I agree that there is a SAP aspect to this. However, I tend to think of it as simply designing the bridge interfaces for reuse -- as opposed to an inherent characteristic of the bridge. I don't need to design the bridges for reuse. The bridge formalism provides a disciplined mechanism for reuse, but the context determines whether the design should incorporate reuse. If I had an application that would always be run by a GUI, I would probably not be terribly bent out of shape by calling the Application domain's synchronous service 'User Requested Go' rather than simply 'Go'.
That is, there is nothing in the context to preclude this _unless_ one can envision a realistic reuse situation where there would be no User -- as defined by the locally accepted subject matter contexts B-).

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Neal Welland writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

lahman wrote:
>
> lahman writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Dearing...
>
> > I also find it interesting to note that there seemed to be a marked
> > recognition at the SMUG conference that UML is not going to go away
> > without a fight and that "resistance is futile". The general message
> > seemed to be "if you can't beat 'em, join 'em", and there were some
> > interesting presentations about how UML can relate to Shlaer-Mellor.
> >
> > My own personal opinion was that there was a healthy competitive tone
> > between the SM and UML camps, and that there seemed to be a readiness to
> > take on board some of the useful concepts of UML. I felt that one or two
> > of the presentations were ever so slightly over-biased against UML,
> > though. I felt that one or two of the claims suggesting that, for
> > instance, using UML will inherently tie your architecture design into
> > your application design in a way that prevents you from separating the
> > two were a little unfair.
>
> FWIW, my perception of the state of UML vs. S-M is that the underlying
> methodologies are still quite different. This is especially true now that
> schools associated with UML have developed that define objects in terms of
> functionality rather than data (e.g., an object is a collection of
> responsibilities). I suspect this stems from using use cases directly as an
> _initial_ design mechanism (i.e., to develop the class models), resulting in
> Actors, Controllers, etc. objects. This can also be seen at the level of
> state models: in S-M we tend to think in terms of event communication with a
> queue manager in the architecture, while in the UML world FSM transitions are
> usually implemented directly as action function calls.

Interesting. Are you suggesting that use cases should not be used at all for a S-M development? Kennedy Carter recently released a version of their toolset which now supports "UML Use Case and Sequence Diagrams". I acknowledge that KC are S-M with some notable extensions, but the logic still prevails. Have KC committed a crime in your eyes? Or are you simply suggesting that the transition from use case to IM needs a little more thought?

--
           @@@@@@@@@
          @@ ~   ~ @@
            ( *  * )
 ============-oOOo-(_)-oOOo-============
 Neal A. Welland           GPT Ltd.
 Phone : +44 1203 562197   New Century Park
 Fax   : +44 1203 562826   PO Box 53
 Email : wellanna@cvsf305.gpt.co.uk
                           Coventry. CV3 1HJ.
                           England
 ============-.oooO-=-(   )-============
 !!!!!!!!!!!!!!!!!!!!!!!!!!  Oooo.
 !!USE FIXED FONT TO VIEW!! (    )   ) /
 !!!!!!!!!!!!!!!!!!!!!!!!!!  \  (   (_/
                              \_)

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> FWIW, my perception of the state of UML vs. S-M is that the underlying
> methodologies are still quite different.
It may be a bit more precise to say that the UML is a *language* for modeling OO systems. The UML, as defined by the OMG, does not say anything about *process*. While the language in a S-M model is at least similar to the language in a UML model, Lahman's perception is certainly true when one considers the process(es) that most UML modelers use.

> On the notation front, S-M's notation can be expressed as a subset of UML

It's not clear to me, really, whose is a subset of whose. I'd tend to characterise it this way: there is a fair amount of intersection between the languages, but there's a non-trivial amount in each that is not cleanly expressible in the other. For example, S-M doesn't have use cases. OTOH, UML does not provide a consistent way of specifying the semantics of actions (beyond using their Object Constraint Language, OCL, to state pre-conditions and post-conditions).

> ... As I read the current
> position of the S-M camp it is: if you want 100% code generation, model
> simulation, and platform independence, then you will have to use the S-M
> subset and the S-M approach.

I should mention that Steve Mellor and I have been actively working in the OMG (NB: while UML was developed by Rational and others, its "marketing clout" comes from the fact that it was adopted by the OMG. It's the OMG that "owns" UML now, not Rational) to extend UML with "precise action semantics". Among other things, UML with precise action semantics would allow the use of the S-M process with UML as the modeling language. For more info on this, see http://www.projtech.com/pubs/uml98.html

If all goes according to the current plan, the action semantics for UML would be adopted by the OMG around January, 2000 (preliminary drafts should be available before then). A long-term goal is to be able to use the S-M approach with a suitably modified version of UML, and that this would be the best of both worlds for those concerned.

-- steve

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dearing...

I am working from an abstract w/o the original context since I wasn't at ESMUG. But lack of facts has never stopped me before...

> The team first ran into trouble when they partitioned the domains and
> very quickly territorialised these domains, making the domain boundaries
> very inflexible and rendering communication between teams more difficult
> than it should have been. The next problem was that they were modelling
> complex parameters and rules in a service domain, thereby neglecting the
> actual application domain. They could have easily modelled simple
> objects and behaviour in the application domain instead.

> Ongoing commitment and vision required
> Start with a domain chart - and update it

We have not had communication problems with inflexible domain boundaries per se, because one of the first things we do is define the bridges for the DC in detail -- as a team effort. This requires good mission statements for the domains. But once everyone is on the same page concerning the domain abstractions and requirements flows, the communication tends to go OK.

However, a problem that we have experienced that seems quite similar is committing too much to a single domain. This was manifested by inordinately complex state machines, unrealistic objects (e.g., when you have 10**8 Pin States, it would be lunacy to create an instance for each one), and general nervousness about complexity.
In this situation one has to take another look at the levels of abstraction for the domains -- perhaps this is what Mike and Leon meant by "vision". In so doing you may have to move objects from service domains to clients, but we try to avoid this because it changes the scope of the client domain and, as a result, changes that domain's design. Our usual approach is to create a new domain at a different level of abstraction or move some details into the implementation domains as realized code. For example, our Pin State became a realized bitmap class with associated functions that magically appeared as transforms, accessors, etc. in the ADFD, and the "object" only appeared in the IM as an attribute (i.e., a handle).

There is another issue here, though. The recommended procedure is to develop from the top of the DC downwards so that developers are only working on domains at the same level in the DC. This tends to eliminate the communication problems because development does not start on a level until the clients are completed, so there is no confusion about what the clients want. In practice, we find this impractical because of time-to-market requirements, so we have to work on all domains in parallel.

The only countermeasure I see for this is for everyone to spend extra time on the DC to make sure that the level of abstraction for each domain is well defined. If this is done, then it should not be difficult to at least rough in the bridge descriptions at the synchronous service argument level to everyone's satisfaction before developing the domain's internals. It will also help prevent the need to move modeling upwards on the DC later on. If a domain becomes too complex, more lower level domains can be added without affecting the clients or even other domains at the same level -- essentially this remains a personal problem for the original domain developers. At worst, some client bridges have to be re-routed in the DC to new domains.

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> Believe me when I say I'm not missing the point, but... I see this
> crap (pardon me) happen all the time. There is nothing uniquely S-M
> about these problems. Why should the methodology take the rap?

Allen,

I agree with your observation, but... When a project team tries something new, such as S-M, they say "Hey World, we're doing S-M". They don't say, as a matter of course, "Hey World, we're also doing (or *not*, as the case may be) fundamental software project management".

Note: fundamental software project management = planning, project tracking & oversight, configuration and change management, personnel education, ...

Clearly, whether a project does or does not do fundamental software project management has a great impact on the ability of the software project to be successful. More so, IMHO, than the impact of things like S-M. But when the team only advertises that they're doing S-M, then the success or failure of the project is automatically correlated with S-M, not with whether or not they did fundamental software project management. An S-M project that fails for project management reasons will be blamed on S-M, not on project mis-management.
OTOH, an S-M project that avoided failure through fundamental software project management will be seen as an S-M success, not as a project management success.

To me, the "Going Down in Flames" stuff is really just a bunch of fundamental software project management stuff stated in different words. I think Steve & Sally would be the first to agree that S-M has little chance of success without fundamental software project management along with it. But I think they also treat this as a "It goes without saying ..." issue, in that they think it should be painfully obvious to even the casual observer that no technical approach (S-M, Booch, Rumbaugh, UML, Objectory, SA/SD, ...) will work without fundamental software project management.

The disconnect happens when people mistakenly try S-M without realizing that fundamental software project management is also important. The failure of these projects is blamed on S-M, not on project mismanagement. Sigh.

-- steve

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

David Pedlar wrote:
> UI ---User Requested Go---> BRIDGE -----Go-----> Application
>
> Presumably we would have some requirement like
> R1: "When the User Requests Go, start the application."
>
> This requirement would have to be imposed on the bridge, since it's the
> only entity that knows about both Users and applications.
> [reasoning deleted]
> If the domain chart were to show the flow of requirements, it would have
> to be like-
>
> UI <----- bridge -----> application
>
> I conclude that the directions of the arrows on typical domain charts
> are arbitrary and do not actually show a flow of requirements.

This depends on how you read the requirements, and how you word the mission statements of your domains. The mission statement should allow you to extract from the requirements those concepts that are grounded in the domain, and those which are delegated to others. All requirements are fed to the top level domain(s); the domain itself produces the requirements spec for its bridges; and they provide the requirements for the service domains; etc.

If we assume that the domain chart contains only one top level domain (the application), which has a mission statement that includes the application but excludes the user interface, then your requirement would be interpreted as "start the application when (the user requests go)".

The application domain will obviously take responsibility for the application; but it will note that the "go" is controlled by something that is not the application. It will therefore need a wormhole to propagate the requirement. The wormhole will impose a requirement on the bridge that says "R1(*): When the user requests go, invoke this service". The bridge then passes this on to the service domain.

Of course, you could choose to put the user interface at the top of your domain chart. Then the user interface would see the requirement before the application. However, most people don't do that (Don't ask me to justify this; someone else can do that).

> Does the S & M method say much about requirements?

No.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
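A minimal sketch of how such a wormhole might be realised in a translated architecture. All names here (Application, UserInterface, on_go_requested) are hypothetical, and this illustrates the idea rather than anything the method prescribes:

    #include <functional>
    #include <iostream>

    // The application publishes an entry point: "invoke this service
    // when the user requests go". It knows nothing about who invokes it.
    class Application {
    public:
        void go() { std::cout << "application started\n"; }
    };

    // The UI knows nothing about the application; it only raises the
    // indication "user requested go" through a callback that the
    // bridge supplies.
    class UserInterface {
    public:
        std::function<void()> on_go_requested;  // wired up by the bridge
        void user_clicks_go() { if (on_go_requested) on_go_requested(); }
    };

    int main() {
        Application app;
        UserInterface ui;
        // The bridge satisfies R1(*): it maps the UI's indication onto
        // the application's command.
        ui.on_go_requested = [&app] { app.go(); };
        ui.user_clicks_go();  // prints "application started"
        return 0;
    }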
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> If I had an application that would always be run by a GUI, I would
> probably not be terribly bent out of shape by calling the Application
> domain's synchronous service 'User Requested Go' rather than simply
> 'Go'. That is, there is nothing in the context to preclude this _unless_
> one can envision a realistic reuse situation where there would be no User
> -- as defined by the locally accepted subject matter contexts B-).

I would probably not include the "user requested" bit in the name, basically because it doesn't really add anything. If you had a lot of services which could be mistaken for "go" then a better name would be needed ("user requested go" might be one possibility).

I find it more interesting to look at the other side of the name: the "go" bit. Suppose we have a fairly generic operator interface that has the general capability of starting activities. It may have an output service named "Request start: Activity". The application may have several input services, for example: "begin defragmentation", "open terminal", etc. The generic "request start activity" service may be mapped to these services.

These service names seem to stand by themselves. They don't need to be prefixed with an indication of who invokes them. This would only be necessary if, for some strange reason that I can't think of at the moment, the services were partitioned into intended usage classes and there was the possibility of someone in six months' time getting confused. To use an unnecessary prefix could cause confusion in someone who thinks "this domain isn't a user: can it use the service?"

More interesting (IMO) is the polymorphism of the interface. A bit of text mangling and an appropriate class structure would allow the bridge to be implemented as simple inheritance. I'd probably put the base class in the bridge; the subclasses in the client; and an association to the server(s) that start services. However, if the server is unique then I could put the base class in the server and optimise out the bridge. Implementing the translator for this is an exercise left to the reader :-).

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen,

I thoroughly agree, and I don't think anyone was necessarily suggesting that it was unique to SM developments; but if that's the way it came across in my much shortened summary of the proceedings, I unreservedly apologise.

The truth is, any idiot can run a project badly; it doesn't take SM training (or UML for that matter) to make a project fail. But it doesn't do any harm to remind people of the pitfalls of software development from time to time, especially when they spend a lot of their time focussing on technical issues of the modelling.

Regards,

Danny :-)

>>> Allen Theobald 28/09/98 10:02:10 >>>
Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Going Down In Flames...
[snip]

> The lesson learned: (Leon, Mike - forgive me for quoting directly
> from your slides)
>
> Ongoing Commitment and vision required
> Start with a domain chart - and update it
> Be careful who leads the modelling effort
> Start the architecture early
> Control incoming requirements
> Even the worst disasters can be repaired (but expert help
> costs less BEFORE a crisis)

Believe me when I say I'm not missing the point, but... I see this crap (pardon me) happen all the time. There is nothing uniquely S-M about these problems. Why should the methodology take the rap?

-Allen

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

This is probably obvious, but... In the larger scheme, usually a client is associated with the UI service domain...

Customer is a client of a Bank. Bank has a service domain known as "Account Manager" who gets requirements from Customer. "Account Manager" is a client of "Banking Application". "Banking Application" has a service domain "User Interface" to get requirements from "Account Manager." Just wanted to make sure that "User Interface" was not confused with "User." User says "Go", User Interface says "User said Go."

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com    www.transcrypt.com

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have an information modeling question.

I have a 'file'. The file is created from outside the system. The user will dynamically 'parse' the file; by which I mean the user will 'create' his own 'records' within the file (assume he does this error-free -- whatever that means). Each 'record' is assigned exactly one 'record type'. Each 'record' has 0 or 1 'types' to be associated with that 'record'.

'Parsing' is a separate step from 'associating the type'.

So, after 'parsing' I would have an object ('record') resembling (ignoring table syntax):

    -------------------|---------------|
    | Ptr to file name | 'c:\file.dat' |---> 1 of these per file
    -------------------|---------------|
    | begin pos        | 0             |\
    -------------------|---------------| \
    | end pos          | 10            |  \
    -------------------|---------------|   \
    | type             | -none-        |    \
    -------------------|---------------|     -> 1..N of these per file
    | begin pos        | 11            |    /
    -------------------|---------------|   /
    | end pos          | 20            |  /
    -------------------|---------------| /
    | type             | -none-        |
    -----------------------------------|

And after associating a function the object would resemble:

    -------------------|---------------|
    | Ptr to file name | 'c:\file.dat' |---> 1 of these per file
    -------------------|---------------|
    | begin pos        | 0             |\
    -------------------|---------------| \
    | end pos          | 10            |  \
    -------------------|---------------|   \
    | type             | TypeA         |    \
    -------------------|---------------|     -> 1..N of these per file
    | begin pos        | 11            |    /
    -------------------|---------------|   /
    | end pos          | 20            |  /
    -------------------|---------------| /
    | type             | TypeB         |
    -----------------------------------|

I guess the IM looks something like:

                           c
    record <<---------------> parser
    * id                     * id
    o parser id (R)          o file id
    o begin
    o end
    o record type id (R)
      ^
      |
      |
      V c
    record type
    * id

Comments? Would the 'create this record beginning here... ending here...' event be sent to 'parser' or 'record'? Does it matter? Would the 'assign type... to record...' event be sent to 'record' or 'record type'?

I love playing around with this stuff!!
Kind Regards,

Allen

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Suggestion: Change parser to file.

                           c
    record <<---------------> file
    * id                     * id
    o parser id (R)          o file id
    o begin
    o end
    o record type id (R)
      ^
      |
      |
      V c
    record type
    * id

Send the 'create this record beginning here... ending here...' event to the file (he 'owns' the records). Send the 'assign type... to record...' event to 'record' (he holds the referential attribute that needs to be set when the relationship is formed).

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com    www.transcrypt.com

Sam Walker writes to shlaer-mellor-users:
--------------------------------------------------------------------

__________________________
Sam Walker
Software Engineer
Advanced Technology Division
Tait Electronics Ltd
Phone (64) (03) 358 6683
Fax (64) (03) 358 0432

>>> Dave Whipp 29/September/1998 04:36am >>> Dave Whipp writes to
> I would probably not include the "user requested" bit in the
> name; basically because it doesn't really add anything. If you
> had a lot of services which could be mistaken for "go" then a
> better name would be needed ("user requested go" might be
> one possibility).

I think you are missing my point. 'User Requested Go' is an indication, whereas 'Go' is a command (or request). If it is not generic enough for every possible example, call it 'Go Requested' or whatever. From the user interface's point of view this communication is an indication. However, as was pointed out previously, the bridge changes this indication into a command which appears as though it came directly from the user, which is called 'Go'.

Sam

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Pedlar...

> So the invocation control flow would be ...
>
> UI ---User Requested Go---> BRIDGE -----Go-----> Application
>
> Presumably we would have some requirement like
> R1: "When the User Requests Go, start the application."

True, but probably more general (see below).

> This requirement would have to be imposed on the bridge, since it's the
> only entity that knows about both Users and applications.
> The bridge would then impose requirements respectively
> "When the user requests go, call the wormhole called 'User Requested
> Go'."
> and
> "On receipt of the Go message, start the application."
> on the two domains.
> Therefore as far as requirement flow is concerned,
> the bridge would be acting as client(DC) to both the UI and
> application domains.
> Indeed if one domain was to impose a requirement directly on the other,
> one of the domains would be polluted by it.
>
> If the domain chart were to show the flow of requirements, it would have
> to be like-
>
> UI <----- bridge -----> application
>
> I conclude that the directions of the arrows on typical domain charts
> are arbitrary and do not actually show a flow of requirements.

I think this is mixing communication with requirements flow. I suspect the requirement is something like, "the user must be able to start the execution interactively". At the functional specification level, that requirement gets massaged into something like: "the user will be able to start the execution by clicking a button control labeled 'Go' in the XXX screen of a GUI".
Then preliminary design comes up with a domain chart that provides for one or more User Interface domains as services to the main application. The main Application domain has a requirement to start the execution when the user requests it. The design chooses to acquire these requests through a service domain, the UI. Thus the Application passes a requirement to the UI domain that it must acquire the request to start execution. At this point the Application has delegated the responsibility for satisfying the requirements concerned with providing a screen XXX with a control button labeled "Go". These requirements have passed from the Application domain to the UI domain and a one-way bridge arrow is drawn to reflect this.

However, the UI domain still has to communicate with the Application domain when the user does click the button on the XXX screen (i.e., when the user request is acquired). That is, the Application also passed the requirement that it be notified when the request is acquired. The way the UI does this is by sending the Application a message (request) when the button has been clicked. This has nothing to do with the flow of requirements -- it is simply the communication context that instantiates the acquisition.

There are no requirements "on the bridge" -- the bridge arrow in the DC merely indicates the flow of requirements. Moreover, the bridge arrow on the DC is an entirely different critter from the communication API mechanisms that are used by the architecture to implement the communication.

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Welland...

Regarding the disadvantages of use cases:

> Interesting. Are you suggesting that use cases should not be used at all
> for a S-M development?

Not so! We use them but only after the IM has been developed. They are very handy for delegating the functionality to objects, verifying relationships, and guiding incremental implementation. (Not to mention building test scenarios.)

> Kennedy Carter recently released a version of their toolset which now
> supports "UML Use Case and Sequence Diagrams". I acknowledge that KC are
> S-M with some notable extensions, but the logic still prevails. Have KC
> committed a crime in your eyes? or are you simply suggesting that the
> transition from use case to IM needs a little more thought?

Though we use IOOA, we haven't gotten into the new toolset yet, so I can't really address that particular issue. The only problem I have with use cases is when they are used to identify the objects in the system. When they are used that way you tend to start getting objects that are essentially derived by functional decomposition rather than the more direct identification of fundamental problem space entities. There is a reason that it is called the Information Model. B-)

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Tockey...

> > FWIW, my perception of the state of UML vs. S-M is that the underlying
> > methodologies are still quite different.
> It may be a bit more precise to say that the UML is a *language* for
> modeling OO systems. The UML, as defined by the OMG, does not say
> anything about *process*. While the language in a S-M model is at least
> similar to the language in a UML model, Lahman's perception is certainly
> true when one considers the process(es) that most UML modelers use.

True, but there is a family of methodologies, approaches, etc. that are associated with UML and, so far as I can tell, they are relatively similar among themselves but they are quite different from S-M.

> > On the notation front, S-M's notation can be expressed as a subset of UML
>
> It's not clear to me, really, whose is a subset of whose. I'd tend
> to characterise it as there is a fair amount of intersection between the
> languages but that there's a non-trivial amount in each that is not cleanly
> expressible in the other. For example, S-M doesn't have use cases. OTOH, UML
> does not provide a consistent way of specifying the semantics of actions
> (beyond using their Object Constraint Language, OCL, to state pre-conditions
> and post-conditions).

True. I made the statement based upon the idea that there's a whole lot of stuff in UML that isn't in S-M and very little in the S-M notation that isn't in UML, other than action semantics. I didn't include the action semantics issue in my thinking because UML simply doesn't address that issue. But I agree it's kind of hard to justify that view when talking of subsets.

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I would probably not include the "user requested" bit in the name;
> basically because it doesn't really add anything. If you had a
> lot of services which could be mistaken for "go" then a better
> name would be needed ("user requested go" might be one possibility).

I basically agree. I just wouldn't go into Obstinate Mode if someone insisted on doing this in an application with a customized GUI.

> More interesting (IMO) is the polymorphism of the interface.
> A bit of text mangling and an appropriate class structure would
> allow the bridge to be implemented as simple inheritance. I'd
> probably put the base class in the bridge; the subclasses in the
> client and an association to the server(s) that start services.
> However, if the server is unique then I could put the base class
> in the server and optimise out the bridge. Implementing the
> translator for this is an exercise left to the reader :-).

Another spin is with the rather common interface for test systems dating from before GPIB. The requests for tests come in four pieces: Setup, to initialize the system for a particular instrument; Initialize, to initialize that instrument; Measure, to have the instrument do its thing; and Fetch, to get the results. The Measure is typically generic because it doesn't depend upon the instrument, its initialization, or the data to recover -- essentially an inheritable Go.

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com
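A short sketch of that four-piece interface in C++, with hypothetical names (Instrument, Voltmeter); the point is only that Measure is the generic, inheritable piece while the other three are instrument-specific:

    #include <iostream>

    class Instrument {
    public:
        virtual ~Instrument() = default;
        virtual void setup() = 0;        // prepare the system for this instrument
        virtual void initialize() = 0;   // initialize the instrument itself
        virtual double fetch() = 0;      // recover the results

        // Measure is the inheritable, generic "Go": it does not depend
        // on which instrument this is or how it was initialized.
        void measure() { std::cout << "triggering measurement\n"; }
    };

    class Voltmeter : public Instrument {
    public:
        void setup() override      { std::cout << "route DUT pin to voltmeter\n"; }
        void initialize() override { std::cout << "set range and filter\n"; }
        double fetch() override    { return 1.23; }  // placeholder reading
    };

    int main() {
        Voltmeter v;
        v.setup();
        v.initialize();
        v.measure();
        std::cout << "result: " << v.fetch() << "\n";
        return 0;
    }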
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

It has been a long day, so I may have simply densed out, but I have some confusions...

> I have an information modeling question.
>
> I have a 'file'. The file is created from outside the system. The
> user will dynamically 'parse' the file; by which I mean the user will
> 'create' his own 'records' within the file (assume he does this
> error-free -- whatever that means). Each 'record' is assigned exactly
> one 'record type'. Each 'record' has 0 or 1 'types' to be
> associated with that 'record'.

If the record has exactly one type, why does the record in the diagram have two types?

> 'Parsing' is a separate step from 'associating the type'.

Is the application doing the parsing on a user defined file? If so, are the types filled in? Or is the file empty and the application is supposed to fill in the records by creating them?

> So, after 'parsing' I would have an object ('record')
> resembling (ignoring table syntax):
>
>     -------------------|---------------|
>     | Ptr to file name | 'c:\file.dat' |---> 1 of these per file
>     -------------------|---------------|
>     | begin pos        | 0             |\
>     -------------------|---------------| \
>     | end pos          | 10            |  \
>     -------------------|---------------|   \
>     | type             | -none-        |    \
>     -------------------|---------------|     -> 1..N of these per file
>     | begin pos        | 11            |    /
>     -------------------|---------------|   /
>     | end pos          | 20            |  /
>     -------------------|---------------| /
>     | type             | -none-        |
>     -----------------------------------|
>
> And after associating a function the object would resemble:

I don't grok the "associating a function" bit. What function? How is a type associated with a record? Where do the types come from?

> I guess the IM looks something like:
>
>                            c
>     record <<---------------> parser
>     * id                     * id
>     o parser id (R)          o file id
>     o begin
>     o end
>     o record type id (R)
>       ^
>       |
>       |
>       V c
>     record type
>     * id
>
> Comments? Would the 'create this record beginning here... ending
> here...' event be sent to 'parser' or 'record'? Does it matter? Would
> the 'assign type... to record...' event be sent to 'record' or 'record
> type'?

These aren't IM questions, they are state model questions. Or are you asking how the functionality would be apportioned in the IM to the state machines? Despite my total lack of understanding, let me speculate...

"Parser" seems like it should be named "File" if external events like Create Record are being provided and it would be the logical recipient of such events. Also, "record type" looks very suspicious to me because it has no data other than the ID. Where the Assign Type event goes would depend a lot on where the types come from and what the rules for assigning them are. Based upon past experience, though, I would be surprised if there is more than one active object.

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Neal Welland writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman...

> lahman writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Welland...
> Regarding the disadvantages of use cases:
>
> > Interesting. Are you suggesting that use cases should not be used at all
> > for a S-M development?
>
> Not so! We use them but only after the IM has been developed. They are very
> handy for delegating the functionality to objects, verifying relationships,
> and guiding incremental implementation. (Not to mention building test
> scenarios.)

I'll confess that we haven't looked in depth at the new I-OOA toolset yet, but it would seem that they are using the use cases to drive the whole process. Use cases are scoped to a project version and are a pre-cursor to domain modelling. Each use case sequence diagram identifies (and validates) the interactions between identified domains.

I find it hard to understand the value of producing a use case after the IM has been developed. It's almost like writing code before you have a design. Use cases define behavioural system requirements from the perspective of the user. An IM represents an abstract analysis of the entities within a domain and defines the initial requirements fulfilled by that domain. Surely it is more logical to understand the requirements on the system and then allocate them to a suitable domain, rather than work from an IM and determine what your system requirements are?

> > Kennedy Carter recently released a version of their toolset which now
> > supports "UML Use Case and Sequence Diagrams". I acknowledge that KC are
> > S-M with some notable extensions, but the logic still prevails. Have KC
> > committed a crime in your eyes? or are you simply suggesting that the
> > transition from use case to IM needs a little more thought?
>
> Though we use IOOA, we haven't gotten into the new toolset yet, so I can't
> really address that particular issue. The only problem I have with use cases
> is when they are used to identify the objects in the system. When they are
> used that way you tend to start getting objects that are essentially derived
> by functional decomposition rather than the more direct identification of
> fundamental problem space entities. There is a reason that it is called the
> Information Model. B-)

I agree with your assertion regarding the identification of objects from use cases, but that should not prevent their use within S-M, prior to producing an IM. KC seem to have used the use case paradigm to drive the analysis of the domain chart. How they prescribe their use during IM, I am not sure.

Perhaps somebody from KC would care to add their own perspective on this?

--
Neal A. Welland           GPT Ltd.
Phone : +44 1203 562197   New Century Park
Fax   : +44 1203 562826   PO Box 53
Email : wellanna@cvsf305.gpt.co.uk
Coventry. CV3 1HJ. England
[ASCII-art signature omitted: USE FIXED FONT TO VIEW]

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> However, a problem that we have experienced that seems quite similar is
> committing too much to a single domain.
> This was manifested by inordinately
> complex state machines, unrealistic objects (e.g., when you have 10**8 Pin
> States, it would be lunacy to create an instance for each one),
> [...]
> In so doing you may have to move objects from service domains to clients,
> but we try to avoid this because it changes the scope of the client domain
> and, as a result, changes that domain's design. Our usual approach is to
> create a new domain at a different level of abstraction or move some details
> into the implementation domains as realized code. For example, our Pin
> State became a realized bitmap class with associated functions that
> magically appeared as transforms, accessors, etc. in the ADFD and the
> "object" only appeared in the IM as an attribute (i.e., a handle).

This seems to me to be a pollution of the IM with design considerations. Originally you thought you had an object (pin_state), which presumably had an identifier and a small number of attributes with very restricted attribute domains (your example suggests 1 attribute with 2 possible values; however, the general solution doesn't require anything quite so constrained).

Having realised that you would have a large number of instances (10^8), you decided that it was no longer a valid object. This seems a bit strange; so perhaps I've misunderstood something here.

Anyway, in the situation I described, it may be necessary to find a very efficient architectural mechanism for storing the object. The first thing to determine is whether it is necessary to store the identifier. If it is arbitrary, or constructed from one or more attributes whose attribute domains are well defined (finite), then it isn't. This is normally the case, so I won't worry about storing the identifier.

So, in the architecture we can identify instances of the object with an index. (If the identifier is non-arbitrary then it is simple to map an integer index onto members from finite sets.) We can use the index into one or more bit vectors. If all the attributes of the object have only two possible values then we can use 1 bit-vector per attribute. [In general, you need N bit-vectors per attribute, where N = log2(card(attribute_domain)) rounded up; of course, you can be clever with the packing and use a non-integer number of bits per instance.] This gives a very memory-efficient storage of the instances without polluting the IM.

The more general problem of simplifying a complex domain can be handled by a combination of service domains and architectural enhancements. In my SMUG'97 presentation I suggested that the former breaks apart a domain into 2 domains where the complexity of the two combined is the sum of their individual complexities; whereas the translative approach enables combined complexity to be the product of the individual complexities. There was no rigorous mathematical justification - it is just a first order empirical approximation.

The basic evidence is that the construction of bridges to a service domain generally requires point-to-point mappings between the models; whereas the architectural approach uses bridges from a point in the server domain's model to a point in the client's meta-model. The meta-model applies to the whole client model, so the effect of the mapping is distributed across the whole model (or, to reverse the viewpoint: a factorisation applies to the whole model, not just to a specific element). Thus factoring into architectural domains is a much more powerful approach; but only when the problem allows.
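For concreteness, here is a rough C++ sketch of the bit-vector mechanism described above, for the simplest case of one two-valued attribute. The names (PinStateStore, level) are invented for illustration:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Instances are identified by an integer index; the attribute is
    // stored as one bit per instance in a packed vector of words.
    class PinStateStore {
    public:
        explicit PinStateStore(std::size_t instance_count)
            : bits_((instance_count + 63) / 64, 0), count_(instance_count) {}

        bool level(std::size_t index) const {           // read accessor
            assert(index < count_);
            return (bits_[index / 64] >> (index % 64)) & 1u;
        }

        void set_level(std::size_t index, bool value) { // write accessor
            assert(index < count_);
            const std::uint64_t mask = std::uint64_t{1} << (index % 64);
            if (value) bits_[index / 64] |=  mask;
            else       bits_[index / 64] &= ~mask;
        }

    private:
        std::vector<std::uint64_t> bits_;  // one bit per instance
        std::size_t count_;
    };

    int main() {
        PinStateStore pins(100000000);     // 10^8 instances in ~12.5 MB
        pins.set_level(12345, true);
        return pins.level(12345) ? 0 : 1;
    }

An attribute with a larger (finite) domain would need ceil(log2(cardinality)) such vectors, as noted above.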
In the factoring example above, if only one object in the domain has properties that use the mapping then obviously the mapping is not distributed across the whole domain. But having put the mapping in place, any other objects that meet the criteria are mapped with no additional effort.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Chris Raistrick writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Welland...

> Neal Welland writes to shlaer-mellor-users:
> --------------------------------------------------------------------
> I agree with your assertion regarding the identification of objects from
> use cases, but that should not prevent their use within S-M, prior to
> producing an IM. KC seem to have used the use case paradigm to drive the
> analysis of the domain chart. How they prescribe their use during IM, I
> am not sure.
>
> Perhaps somebody from KC would care to add their own perspective on
> this?

Use Case Analysis can fruitfully be deployed at either the system level, showing interactions between domains, or at the domain level, showing how objects or instances interact. For what they are worth, here are my views on each of these.

USE CASES AT THE SYSTEM LEVEL
==============================

I believe that the basic sequence of tasks for "system level" Use Case Analysis on an OOA project is:

1. BUILD THE USE CASE MODEL
-------------------------------------------------

This requires identification of the Actors and Use Cases in the usual way (i.e. the way described in the popular books). Each Use Case is described in terms of its basic and alternate courses, and pre- and post-conditions. Performance requirements can be captured for each Use Case at this time. Commonality can be captured by "uses" relationships. These will give a clue to the required service domains later.

2. BUILD THE DOMAIN CHART
---------------------------------------------

Use Cases provide an agenda of issues to be dealt with by various domains:

a. Since each Use Case represents a function from the users' perspective, they will, for the most part, correspond to application domain functionality. They can therefore be used as a basis for the application domain mission statement;

b. The "used" Use Cases often correspond to the "generic" service domains, such as "logging", "alarms" and "algorithmic services";

c. Some actors represent roles played by people, which will be supported by the appropriate "user interface" and "user authorisation" domains;

d. Some actors represent hardware components, which implies the need for "hardware interface" domains, such as "Process Input/Output" for direct interfaces, or "Communications" for asynchronous interfaces.

3. BUILD THE SEQUENCE DIAGRAMS
--------------------------------------------------------

A Sequence Diagram is built for each Use Case, showing the interactions between the domains on the domain chart. This serves to raise confidence in the chosen domains, and helps to identify missing capabilities that need to be included as additional domains.

4. DEFINE THE DOMAIN INTERFACES
---------------------------------------------------------

This happens as a by-product of the previous step. Each interaction on the Sequence Diagram represents a "bridge mapping", which can be implemented using bridge services or wormholes or whatever your chosen method version requires.
IMHO, it is vital to establish these domain interfaces early, and manage them as they change. It provides a clear demonstration that the system design represented by the domain chart is viable, and helps avoid serious integration problems later.

5a. BUILD THE DOMAINS
-------------------------------------

Easy!

5b. DEFINE MULTI-DOMAIN SIMULATION SCENARIOS
-------------------------------------------------------------------------------

To a naturally lazy analyst such as myself, one of the biggest benefits of Use Case driven development is that the Use Cases and associated Sequence Diagrams kill several birds with one stone. They provide a system view and raise confidence early on, and form the basis for analysis level and target level testing. The Use Case definitions, particularly the pre-conditions, form a great basis for specification of the multi-domain tests to be applied to the model using your chosen OOA simulator.

6. PERFORM MULTI-DOMAIN TESTING
--------------------------------------------------------

The Sequence Diagrams for each Use Case, along with the Use Case post-conditions, effectively define the expected results for each test. These can be compared with the actual interactions that occurred, as presented by your chosen OOA simulator.

USE CASES AT THE DOMAIN LEVEL
==============================

As far as the deployment of Use Cases within a domain is concerned, I believe that many analysts already do this in an informal way when constructing the preliminary OCM before building the state models. The process is comparable to that used for analysis of domain interactions. It involves homing in on that part of a Use Case thread that impinges upon the domain under study. The interactions can now be shown at the object (or even the instance) level.

In the application domain, a terminator represents an external entity. This may be a role performed by a person, or an external system. Happily, this sounds like the definition of an actor. So little imagination is required here - create one terminator for each actor. The services provided by each terminator will correspond to a subset of the services invoked on the Sequence Diagram boundary lines. Other terminators will represent services provided by other domains (often the capabilities captured in the "used" Use Cases). The services provided by these terminators will correspond to the interactions invoked by this domain's lifeline on the corresponding Sequence Diagrams.

Construction of domain-level Sequence Diagrams showing object interactions serves the same purpose as building a Sequence Diagram showing domain interactions:

1. It raises the analysts' confidence that their objects are capable of co-operating to deliver the required behaviour for that Use Case;

2. It establishes the set of services (synchronous and asynchronous) that each object must provide.

CONCLUSION
============

I believe that Use Case analysis has a contribution to make both at the system level and the domain level. However, my experience tells me that the primary technical risks present themselves at the "system design" level, pertaining to the domain chart and bridges. I would therefore suggest that it is here that the greatest return on effort invested can be achieved.
***********************************************************************
Chris Raistrick                 tel  : +44 1483 483200
Kennedy Carter Ltd              fax  : +44 1483 483201
14 The Pines                    web  : http://www.kc.com
Broad Street                    email: chris@kc.com
Guildford GU3 3BH UK

"We may not be Rational but we are Intelligent"
************************************************************************

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Welland...

> I find it hard to understand the value of producing a use case after the
> IM has been developed. It's almost like writing code before you have a
> design. Use cases define behavioural system requirements from the
> perspective of the user. An IM represents an abstract analysis of the
> entities within a domain and defines the initial requirements fulfilled
> by that domain.

Defining an IM for a domain is about abstracting the information in the problem space, and it is a high level, static description of the subject matter. One of S-M's critics described the IM as a "data model on steroids", which I happen to regard as complimentary. I am of the Old School of OO that believes objects are defined by encapsulating data and the processes that operate upon that data. That is, the data comes first.

By their nature use cases describe the dynamics of the system. If you use them to define the IM, then you are essentially using functional decomposition to identify objects. You get objects like Managers and Controllers that have lots of functionality and little or no data. This view has been formalized in the hypermodern schools of OO, such as Responsibility Modeling. In that view objects are defined by encapsulating functionality and the data that supports that functionality. [Recently on OTUG this has been taken to the extreme of arguing that objects should _never_ have accessors -- the only way to obtain information from an object is via responses to requests for actions, and data is not part of the public object abstraction.]

In fairness, Jacobson separates the use case model from the software model, so that one could have a data driven model derived from use cases. The problem with this, in my view, is that it is very tricky to do. It is just too tempting to define pure functional objects. The fact that those gurus who are actively using use cases from scratch are coming up with approaches like Responsibility Modeling demonstrates this, IMO.

I believe the intent of S-M is to use the traditional data-driven view of OO. Therefore the IM should be treated as a refinement of the Domain Chart where the static description is provided from the information in the problem space at the domain's level of abstraction. Once you have that static description, then you still have the problem of delegating the responsibilities to those data-derived objects (i.e., to define the processes that operate on the data). For doing this use cases are very handy -- in fact, it is what they do best.

FWIW, I think the best use of use cases in S-M is to support incremental development. We use them to guide the design and implementation of complete features. We tend to have highly aggressive schedules, fixed dates, and fixed resources, so to get a release shipped we may have to cut features. If you build the features individually this becomes much easier to do -- you just stop building the current feature! Use cases provide the road map to determine the minimum that needs to be implemented.
BTW, I have no objection to using use cases concurrently with developing the IM. In practice it is not always possible to develop the IM without considering the functionality. For example, the placement of an attribute in one object vs. another may depend upon the way the domain's functionality is partitioned. My objection is simply to using use cases as the primary mechanism for identifying objects.

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> This seems to me to be a pollution of the IM with design considerations.
> Originally you thought you had an object (pin_state), which presumably
> had an identifier and a small number of attributes with very restricted
> attribute domains (your example suggests 1 attribute with 2 possible
> values; however, the general solution doesn't require anything quite so
> constrained).
>
> Having realised that you would have a large number of instances (10^8),
> you decided that it was no longer a valid object. This seems a bit
> strange; so perhaps I've misunderstood something here.

Certainly this is a valid concern because of the way I described it. Our rationalization was that having an object with 10**8 instances was a symptom of having an incorrect level of abstraction for the domain. I could argue that the number of instances is a problem space characteristic that the analyst needs to be aware of to perform coloration, but that is kind of thin. But we confirmed by asking ourselves whether the domain actually cared what the value of a particular pin's state in a particular pattern was. The answer was No -- no decision in the domain affecting flow of control or attribute values depended upon the value of any Pin State instance's data. This confirmed that the level of abstraction was too detailed for the domain. I would argue that one could have asked this question without knowing that there would be 10**8 instances and one would have arrived at the same conclusion.

However, I think this opens up two old wounds. The first is a variation on the question of whether an OOA can be implementation independent. In this case the question is: Can one always determine the correct level of abstraction for a domain without considering the implementation? I have a niggling fear that the answer is No, but I have nothing close to a persuasive example.

The other issue is whether one cares. Suppose the domain did care about a Pin State instance's values so that I did need the object with 10**8 instances. You offered a skeleton approach for developing an acceptable implementation during the translation and an elegant argument for making more use of architectural solutions. I believe there is no question that one can do this. But I can argue that this is not worth the effort in some cases for two reasons. Doing such things in the translation is tricky and requires nontrivial development effort to make the translator smart enough to know when and what to do. Even if it is a one-time investment, I question whether it is worthwhile if I happen to already have a library with a Bitmap class. With the existing Bitmap class I can use my solution and the only architectural support needed is mapping the coloration tags to the Bitmap methods.
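By way of illustration, a sketch of what that might look like, assuming a hypothetical pre-existing Bitmap class and invented names (Pattern, read_pin_state, write_pin_state); the translator, driven by the coloration tags, would emit the Bitmap calls wherever the ADFD shows the corresponding accessor processes:

    #include <cstddef>
    #include <vector>

    // Stand-in for the pre-existing library class.
    class Bitmap {
    public:
        explicit Bitmap(std::size_t n) : bits_(n, false) {}
        bool get(std::size_t i) const { return bits_[i]; }
        void set(std::size_t i, bool v) { bits_[i] = v; }
    private:
        std::vector<bool> bits_;
    };

    // In the IM, Pin State survives only as an attribute of its owner
    // -- a handle onto realized code.
    class Pattern {
    public:
        explicit Pattern(std::size_t pins) : pin_states_(pins) {}

        // ADFD accessor "read Pin State" -> coloration tag -> Bitmap::get
        bool read_pin_state(std::size_t pin) const { return pin_states_.get(pin); }

        // ADFD accessor "write Pin State" -> coloration tag -> Bitmap::set
        void write_pin_state(std::size_t pin, bool v) { pin_states_.set(pin, v); }

    private:
        Bitmap pin_states_;  // the "object" reduced to an attribute
    };

    int main() {
        Pattern p(1024);
        p.write_pin_state(7, true);
        return p.read_pin_state(7) ? 0 : 1;
    }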
The second reason, more important in my view, is that I get very nervous when the fundamental structure of the generated code starts to look very different from what I did in the models. The problem is that I debug from the models, and it starts to get very annoying when I cannot easily figure out where I am in the code relative to the models.

The underlying root cause of both of these problems is that the spartan OOA notation is too far removed from the computing environment. In particular, there is no way to directly indicate in the OOA that I am reusing a realized model. If the Pin State had been at the correct level of abstraction, I suspect that I would still have done the same thing. I would opt for implementation pollution because I don't see the penalty exceeding the benefit.

--
H. S. Lahman              There is nothing wrong with me that
Teradyne/ATD              could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

David Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

I've had some thoughts wandering round my mind for a while. They've been slowly crystallising around various problems I've been having with SM when compared to more traditional OOD approaches. Basically, many of the concepts in OOD lead to models that are more maintainable than an SM-OOA model, even though the OOD approach uses elaboration. The basic problem is that all data access navigations are coded in ADFDs. This means that objects may need to access non-local objects. This makes the models brittle.

Anyway, my thought patterns have coalesced onto a simple concept that seems to solve many of the problems without creating too many new ones. I have dutifully transcribed the idea into text in the hope of soliciting some feedback.

Entangled Attributes

Introduction

A complete SM model consists of an information model; objects may have state models; and states have ADFDs. The presence of the ADFDs can severely hamper the modification of the information model because the actions for one state may navigate several relationships, including those that may be quite distant from the object containing the ADFD.

This brittleness can be identified by the use of the Object Access Model (OAM). This much undervalued view is the synchronous counterpart of the OCM. When an object accesses many other objects, this shows up as tangled clusters on the OAM. As a general rule, I like to be able to draw diagrams in 2 dimensions with a minimum number of crossed lines.

Some people have found the need to extend the method to allow object-based synchronous services. These are shown on the OCM (not the OAM!) and are a recognition of the problem of non-local data access. By packaging the access within a synchronous service on a distant object, the modeller can eliminate many non-local navigations and accesses.

In OOA96, the concept of the mathematically dependent attribute was introduced. This allows information to be represented on the OIM that is not in "normal form". The ability to put such attributes in the OIM allows the modeller to reduce some non-local accesses; but not with the same power as the synchronous service. A few months ago, PT clarified that the (M) attribute must be updated by the model itself; there is no magic in the metamodel that takes advantage of the derived nature of the attribute.

To summarise the situation so far:

 . Process models make the OIM brittle
 . OAMs are underused and very messy
 . A vendor-extension added synchronous services - but to the OCM
 . Mathematically dependent attributes might be a solution, but
   they don't have any power in the meta model

Why don't (M) attributes work?

Why didn't PT define automagic updating of (M) attributes? Basically, it's because that would cause a lot of problems. Some of these are conceptual. People ask questions like "when does the update occur?" and "who performs it?". Such concerns can largely be dismissed because a definitive answer is always possible. An appropriate mindset will provide answers to conceptual problems.

A much more serious problem is the question of whether an automagic update would exclude write accessors from the derived attribute. In some cases, where the relationship between the attributes is uni-directional, this doesn't matter; but there are some cases which can't be dismissed. For example, given mass, volume and density attributes, which one do you mark as derived? Such a decision is, at best, arbitrary.

Entangled Attributes: The Solution?

I have an aversion to object based synchronous services. They allow analysis detail to be swept under the carpet. I much prefer the idea of derived attributes. Unfortunately they are broken, so I cast around for alternative ideas.

The world of quantum mechanics has the concept of entangled pairs of particles. Mess with one, and the other is affected immediately, even if it is at the other side of the galaxy. Neither one is in control: the relationship is symmetric (no quibbling from physicists please). Thus I started thinking that the relationship between mass, volume and density is similarly entangled. The term has the advantage that it is not currently used in other OO methods (a rarity indeed).

What if I was able to declare on the OIM that a set of attributes is entangled? Automagic update is a natural consequence of the concept; and there is no problem of deciding which one is derived: they all are, and none of them are.

The Details

How would this work in practice? Let's go back to the basics. An attribute is a piece of information that is visible on the information model. It cannot be accessed directly; accessor processes are used to set and get the value. Whether or not the attribute is actually stored in the object is deferred to the architecture. In the simple case, each attribute has a "read" and a "write" accessor process.

Now consider the next level of complexity: the unidirectional dependence. An example of this might be prices on items in a shop. A carton of orange juice has a price; but the price applies to all the cartons, not the individual instance. Thus, in a normalised information model, the orange juice carton would not have a price - only its specification object would. There would be a 1:M relationship between the price specification and the actual shelf-item.

In the current method, an ADFD wanting to know the price of the orange juice would need an additional navigation to the price specification object. If many ADFDs do this, then there would be a tangled mess on the OAM because both objects are accessed by many others. The solution is to add a price attribute to the shelf-item and entangle it with the price attribute on its specification object. In this case, the specification attribute would have a read accessor and a write accessor; the shelf item would have only a read accessor.
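A small C++ sketch of this unidirectional case, with invented class names (Specification, ShelfItem): the shelf-item's price is just a window onto the specification's price, so it needs no storage and no write accessor of its own, and its read accessor is up to date by construction:

    #include <memory>

    class Specification {
    public:
        double price() const { return price_; }    // read accessor
        void set_price(double p) { price_ = p; }    // write accessor
    private:
        double price_ = 0.0;
    };

    class ShelfItem {
    public:
        explicit ShelfItem(std::shared_ptr<const Specification> spec)
            : spec_(std::move(spec)) {}

        // Entangled read accessor: no relationship navigation appears
        // in the ADFD, and the value can never be stale.
        double price() const { return spec_->price(); }

    private:
        std::shared_ptr<const Specification> spec_;  // the 1:M relationship
    };

    int main() {
        auto spec = std::make_shared<Specification>();
        spec->set_price(0.99);
        ShelfItem carton(spec);
        return carton.price() == 0.99 ? 0 : 1;
    }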
The attribute definition (currently an informal attribute description) would have to define the mathematical relationship between the two; and also define the allowed (or disallowed) write accessors.

Now we come to the traditionally problematic case of the symmetric entanglement. Continuing with the mass/volume/density example, it is possible that write accessors would be needed for all three. The question then becomes: "if I set the volume, does this change the mass or the density?" The answer to this is quite obvious: it depends what you are doing to the attribute.

Volume can be changed in two ways: you can set the quantity of stuff, or you can set its compression. Attributes have types; and types define what operations are valid. The volume attribute has two write accessors; and neither can be thought of as "just setting the value". The units of both accessors would be units of volume. A similar argument would hold for the write accessors for mass and density. You might change the mass by adding some neutrons to the atomic nuclei (which would preserve the volume) or you might just add more stuff. In both cases, you would say what the resulting mass is; but the manner of adding the mass must be defined by the write accessor.

Philosophical benefits

I claimed earlier that synchronous services allow important analysis detail to be hidden away. Why did I claim this; and why doesn't my proposal suffer the same problem?

The purpose of the OOA is to explicitly bring out the details of the problem: information on the OIM and behaviour in the state models. Complexity in the ADFD represents real complexity in the subject matter. Both synchronous services and this proposal are based on the observation that this isn't true: some of the complexity in the ADFDs is due to the modelling formalism.

The synchronous service solution is to hide the complexity that was introduced. By encapsulating a set of navigations and transforms you can dramatically simplify the state actions. I claim this is only an illusion. The complexity is still there; it's just been hidden away. This breaks the OOA philosophy of exposing complexity. (You can use synchronous services to hide real complexity, and it isn't always obvious.)

My solution takes a different approach. It asks the question "why has the method introduced complexity; how can that be avoided?" At a first glance, it appears that the complexity is a result of having to fetch the data, then modify it, then put it back. This definitely is the bulk of many ADFDs. However, this is only a symptom, not the cause. I believe that the real cause is a failure to recognise that the information model provides only a viewpoint on the information. An attribute is just a window onto the concept that it represents.

The crucial thing that is missing is the concept that a write accessor doesn't set the value of an attribute: what it really does is modify the underlying thing that is represented. The inclusion of the entangled attribute introduces a slightly more subtle concept. It requires the analyst to explicitly think about the meaning of the write accessors for the attribute. Does "set volume" mean "compress the liquid" or does it mean "get rid of some liquid"? These questions are not explored in the current method. The failure to explore and represent these issues within the method forces the modeller to model the consequences explicitly within the model. It is this explicit modelling of an underlying concept that causes the pervasive complexity.
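To make the symmetric case concrete, here is a rough C++ sketch (the class name Body and the accessor names are invented): all three attributes are derived and none is primary; each write accessor names the manner of the change, which in turn fixes which of the other quantities is preserved:

    #include <cassert>

    class Body {
    public:
        Body(double mass, double volume) : mass_(mass), volume_(volume) {
            assert(volume > 0.0);
        }
        double mass()    const { return mass_; }
        double volume()  const { return volume_; }
        double density() const { return mass_ / volume_; }  // derived view

        // "Compress the stuff": volume changes, mass is preserved,
        // density follows.
        void set_volume_by_compression(double v) {
            assert(v > 0.0);
            volume_ = v;
        }

        // "Get rid of some stuff": volume changes, density is preserved,
        // mass follows.
        void set_volume_by_removal(double v) {
            assert(v > 0.0);
            mass_ = density() * v;  // density() still sees the old volume here
            volume_ = v;
        }

    private:
        // Which two quantities are stored is an architectural choice;
        // the model sees three entangled attributes.
        double mass_, volume_;
    };

    int main() {
        Body b(2.0, 1.0);               // density 2.0
        b.set_volume_by_removal(0.5);   // density stays 2.0, mass becomes 1.0
        return b.mass() == 1.0 ? 0 : 1;
    }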
Synchronous services treat the symptoms; entangled attributes treat the cause.

Problems

The concept of multiple write accessors does cause a problem for some action languages (notably SMALL, which uses a '>' as its symbol for a write accessor). This is not a problem for the ADFD approach; and there is no standard action language for SM yet. So I'll ignore this problem.

There is also the problem of "when does the update occur?" If entangled attributes are on different nodes of a distributed system then we cannot guarantee that the update will be simultaneous. We do not have access to the "spooky action at a distance" that entangled photons possess. Time rules would have to be defined.

A last problem, also introduced with SMALL (but also by CASE vendors), occurs when we ask: "What happens if an entangled attribute is referential?" In the old days, where relationships were defined by the data, this wouldn't be an issue; but if relationships are linked with explicit link/unlink operators it becomes an issue. I've always disliked the explicit form; but it seems quite pervasive. Perhaps another messy complication to the method would be needed to say "referential attributes can't be entangled".

The remaining problem is that of object creation (and deletion). If the attributes are on objects that are created independently then the created values must be consistent. That would be a rule in the method.

So, is there any merit in the idea? That's why I'm posting it. I'll be interested to see what people's reaction is. I would intend the concept to replace both (M) attributes and object-based synchronous services. I would also expect it to simplify the OAM and avoid many of the problems associated with changing the OIM once you've finished the ADFDs. The problems are generally associated with the more recent (i.e. not in the books nor OOA96) parts of the method. It doesn't break anything in the core method.

What do people think?

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to:
>>> David Whipp 09/30/98 07:02AM >>>
> ...The question then becomes: "if I set the volume,
> does this change the mass or the density?"

The answer may need to be determined at runtime. Use the case of a simple amortization.

 - Initial Balance
 - Annual Interest Rate
 - Payment
 - Number of pay periods per year
 - Length of Repayment

Changing one of these can affect any of the others. If I want a loan of $1,000,000 and need to repay it in 5 years, then I can choose the combination of the other attributes to make this happen. If I change the Length of Repayment to 4 years, there is no way to determine which of the other attributes to change. This is usually driven by the user.

If I read any of the attributes, I can calculate the value based on the other parameters. I wrote a program many years ago that did this. The user selected the desired output. This was then calculated when any of the others changed. To do this with dependent attributes, each would need to be able to have a logic set behind it.
"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp:

It seems like the purpose of your idea is to reduce redundant navigation and accesses in the ADFD's by an extension of the idea of the derived attribute, which you call an "entangled" attribute.

A clarifying question: if I do not normally use derived attributes and do not use "object-based synchronous services", would my object access models and ADFD's benefit? By my understanding of your proposal, the answer would seem to be, "no".

Regards,
-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello!

"Each object that has a state model is documented by providing...":

1. A STD
2. A STT
3. Descriptions of actions not on the STD (cause they couldn't fit)

My question is... I have these three items; do I need to *write* anything, or do the diagrams/tables speak for themselves?

Kind Regards,

Allen

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp wrote: ...
>A few months ago, PT clarified that the (M) attribute
>must be updated by the model itself; ...

There was a big ESMUG discussion about update of mathematically dependent attributes [ (M) ] a while back, and my impression was that it is the responsibility of the analyst to state the requirements for how up-to-date the derived attribute must be (for purposes of fetching it) and the responsibility of the __architect__ to supply this "freshness" via appropriate design (updating in the read accessor, etc.). Explicit update of (M) attributes by the model seems like architectural pollution and IMHO violates the spirit of the "(M)".

Can anyone confirm and/or give a rationale for PT's reported statement?
Thanks,
-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> "Each object that has a state model is documented by providing...":
>
> 1. A STD
> 2. A STT
> 3. Descriptions of actions not on the STD (cause they couldn't fit)
>
> My question is...I have these three items; do I need to *write*
> anything, or
> do the diagrams/tables speak for themselves?

They can speak for themselves in the sense that they rigorously and unambiguously communicate _how_ the FSM works. However, I think one can argue that more information is required. For example, as an innocent bystander I might want to know _why_ a particular STT entry is IGNORE. Typically this information is provided in the IM descriptions of relationships, attributes, and objects. If it isn't, then I would argue that Good Practice demands that it be recorded somewhere.

-- H. S. Lahman      There is nothing wrong with me that
Teradyne/ATD         could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

I basically agree with Lynch and Simonson, so I'll just wait for your replies to them. But I have to have something to go on about, so...

> I've had some thoughts wandering round my mind for a while.
> They've been slowly crystallizing around various problems I've
> been having with SM when compared to more traditional OOD
> approaches. Basically, many of the concepts in OOD lead to
> models that are more maintainable than an SM-OOA model, even
> though the OOD approach uses elaboration. The basic problem
> is that all data access navigations are coded in ADFDs. This
> means that objects may need to access non-local objects.
> This makes the models brittle.

I am not sure I buy your basic premise that access of non-local objects makes the SMOOA model brittle. I thought that this was the main point of limiting all data accesses to accessors. The accessor abstracts all of the annoying architectural details of maintaining data integrity. It might give the Architect nightmares, but I don't see where the OOA itself is brittle. Or are the Architect's problems in dealing with the SMOOA what you see as the brittleness?

-- H. S. Lahman      There is nothing wrong with me that
Teradyne/ATD         could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Leslie Munday" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen,

Depends on who your audience is.

If a requirements document which needs confirmation from a customer, you certainly need to explain what's going on in each action, each state and the scope of each object and each domain. (That's assuming that they're familiar with the S-M method, else you'll have to document that too.)

If a design document that is being used to generate code, no additional documentation may be necessary, but I'd still comment every action, state, object and domain.

If the document is aimed towards a tester, then additional documentation on data and timings is appropriate.
So the answer is possibly yes, but more commonly no.

Leslie.

'archive.9810' --

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

I'm replying to all three comments in this single post. I've also added a lot of additional detail to my original proposal. It should clarify some questions (and also demonstrate that it does actually work).

lahman wrote:
> I am not sure I buy your basic premise that access of non-local objects
> makes the SMOOA model brittle. I thought that this was the main point of
> limiting all data accesses to accessors. The accessor abstracts all of
> the annoying architectural details of maintaining data integrity. It
> might give the Architect nightmares, but I don't see where the OOA itself
> is brittle. Or are the Architect's problems in dealing with the SMOOA
> what you see as the brittleness?

What I have observed is that it's more difficult to change a complete model than a model with just an OIM. This is hardly surprising; but it does point to the fact that the OIM does have a dependency on the state models and state actions. It is dependencies that cause problems when you want to change the model. My desire is to minimise that dependency.

It isn't non-local accesses themselves that cause the problems; it's the relationship navigations. By embedding navigations in the ADFD you are creating a dependency between the OIM relationships and that ADFD. Thus a change to that relationship must be made in more than one place. If the navigation appears in multiple ADFDs, and especially if it is non-local, then it can require significant work.

Lynch, Chris D. wrote:
> It seems like the purpose of your idea is to reduce redundant
> navigation and accesses in the ADFD's by an extension of the idea
> of the derived attribute, which you call an "entangled" attribute.

Correct. Though I won't get hung up on the name. (It was meant to indicate that I can support loops of dependency.)

> A clarifying question: if I do not normally use derived attributes
> and do not use "object-based synchronous services", would my object
> access models and ADFD's benefit? By my understanding of your
> proposal, the answer would seem to be, "no".

I do not use them either (because the former are broken and the latter seem like a bad idea). Your reasoning would imply that I wouldn't benefit either. If all your state actions are simple, and you cannot identify dependencies between the OIM and ADFDs, then you probably won't benefit. If you've ever had to make a simple change to the OIM, and found yourself editing more than a small number of ADFDs (or their equivalent) as a consequence, then you probably could.

Dana Simonson wrote:
> The answer may need to be determined at runtime

No problem. This just means that the calculation is dependent on (entangled with) the discriminator.

I think I should probably explain the mechanism. After all, my initial post was slightly vague. I just said that attributes could be interdependent; and that you had to recognise the concept that was expressed by a write accessor.
My mechanism is a good old-fashioned DFD (not ADFD). There are a few differences: first, a write accessor is used to sensitise paths in the graph; secondly, the nodes are mapped to attributes (which are associated with a derivation function); thirdly, data flows are associated with a relationship, and inherit cardinality from that relationship.

The reason for claiming that the DFD formalism introduces less dependency than the ADFD is twofold. Primarily, there is only one DFD per domain (though it could probably be partitioned into unconnected fragments). Secondly, the introduction of the DFD allows a clean separation of the datapath and control path in the model. Some people might say that this is non-OO (whatever that means); but experience with hardware suggests that this separation leads to cleaner models.

The DFD requires the addition of four objects to the OOA-of-OOA (three if you assume that a derivation concept already exists for (M) attributes). Using UML notation (and omitting the boxes round classes) the relevant fragment is:

       0,1            1 1..*                 *
    derivation------------attribute---------------modification_concept
        |1            from|1  modifies        : to|
        |                 |                   :   |
        |                 |                   :   |
        | 1..*            |*            *   * :   |
        +--------------Dataflow------------write_accessor
              |* guards: provides path|       :
              |                       :
              |0,1                    :
        relationship_role           guard

My apologies for the ASCII art (make sure your mail reader uses a non-proportional font); and for the necessarily brief, and incomplete, relationship phrases. (I am not sure whether the "guards" relationship should relate the dataflow to the write_accessor or the modification_concept. Both seem to work. The way I've shown it is definitely OK; but it may be overspecified. Refer to the end of this post for clarification.)

It's probably impossible to see how it works from just an OIM fragment; so a couple of examples may be useful.

1. The price of movies from a video store.

For the background to this example, refer to Martin Fowler's refactoring paper at http://www2.awl.com/cseng/titles/0-201-89542-0/vidrefact/vidrefact.html (I've simplified it a bit.)

Basically, the price of a tape to be rented is defined by the movie that's recorded on it. The cost of a rental depends on the duration of the rental; and the cost to the customer is the sum of the rentals. The DFD is:

                  *
    movie_price-------->tape_price---+
        (R1)                         |
                                     |(R2)
                                     |
                                     v
    duration_of_rental--->multiply=>cost_of_rental
                                       |*
                                       |(R3)
                                       |
                                       v
                          sum_from_zero=>amount_owed

(Again, the notation is restricted by ASCII art.) In this example, there are no guards on the dataflows. The calculations just flow. Note how the cardinality of the dataflow can be shown on the flows, along with the relationship that is navigated to get from one attribute to the next (the flow cardinality is inherited from the relationship). However, because the duration and cost of the rental are both on the same object, no relationship needs to be navigated. The final "sum" process is called "sum_from_zero" to indicate that if there are no rentals then the amount owed is zero.
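[As a sanity check of this dataflow, here is a hypothetical C++ rendering that computes the derived attributes on read. It is one possible implementation strategy, not something the proposal prescribes; all names are taken from the diagram above.]

    #include <iostream>
    #include <vector>

    // Hypothetical rendering of the video-store dataflow:
    //   movie_price -> tape_price -> cost_of_rental -> amount_owed
    struct Movie  { double price; };                       // movie_price
    struct Tape   { const Movie* movie;                    // R1
                    double price() const { return movie->price; } };
    struct Rental { const Tape* tape;                      // R2
                    double duration;                       // days
                    double cost() const { return duration * tape->price(); } };
    struct Customer {
        std::vector<const Rental*> rentals;                // R3
        double amount_owed() const {                       // sum_from_zero
            double sum = 0;
            for (const Rental* r : rentals) sum += r->cost();
            return sum;                                    // zero if no rentals
        }
    };

    int main() {
        Movie m{3.50};
        Tape t{&m};
        Rental r1{&t, 2.0}, r2{&t, 5.0};
        Customer c{{&r1, &r2}};
        std::cout << c.amount_owed() << '\n';   // (2+5) * 3.50 = 24.5
    }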
2. The old mass/density/volume problem.

The previous example just shows how the current (M) attributes could be formalised to make them work. The real extension is to allow the flows to be guarded.

    [quantity]>volume                    [composition]>density
      +----------->volume*density=>mass<----------+
      |                                           |
      |                                           |
      |                                           |
      |[quantity]>mass                            |[composition]>mass
      v                                           v
    mass/density=>volume<------------------------->mass/volume=>density
               [pressure]>density     [pressure]>volume

In the diagram, the double-headed arrows are a shorthand for the fact that there's a flow in each direction. I've also neglected to include any relationship or cardinality information. Such details would add nothing to the example.

Here, each of the six flows is guarded by a qualified write accessor. For example, the guard in the top left of the diagram says that if the write accessor for "volume" is invoked, under the concept of "quantity", then the change to the volume can flow down the path from volume to mass. Thus, if we set the volume to change the quantity then the mass is changed but the density isn't (because no other guards are activated). If we invoke the write accessor for "volume" under the concept of "pressure" then the bottom right guard is activated; so the density is changed but the mass isn't.

One thing you can see on the diagram is that each bidirectional flow represents a single concept with two different attributes' write accessors. This is the reason for my earlier comment that it may not be necessary to include the attribute name in the guard expression; the concept may be sufficient.

It would probably also be nice to find a way to specify the formulae just once for all three rearrangements. However, this probably isn't necessary. The analyst should be quite capable of describing the derivation from three perspectives. However, I'd probably replace the explicit formulae with a name (e.g. of a transform process). Any non-trivial derivation (such as Dana's example) would horribly clutter the diagram.

Dave.
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dana Simonson wrote:
> Responding to: >>> David Whipp 09/30/98 07:02AM >>>
>
> >...The question then becomes: "if I set the volume,
> >does this change the mass or the density?"
> The answer may need to be determined at runtime.
>
> Use the case of a simple amortization.
> -Initial Balance
> -Annual Interest Rate
> -Payment
> -Number of pay periods per year
> -Length of Repayment
>
> Changing one of these can affect any of the others.
>
> If I want a loan of $1,000,000 and need to repay it in 5 years, then I can choose
> the combination of the other attributes to make this happen. If I change the
> Length of Repayment to 4 years, there is no way to determine which of the
> other attributes to change. This is usually driven by the user. If I read
> any of the attributes, I can calculate the value based on the other parameters.

I apologise: my previous reply missed your point.

I have used the idea of a "modification concept" which is specified as part of the write accessor (or which augments it - depends on your viewpoint). It is this "modification concept" that would determine which of the other attributes are affected. I am assuming that the list of possible modification_concepts would be determined by the analyst.

One simple way to implement your requirement would be to have a test process to determine what the user wants to do; and to use the result to decide which write accessor to use. So, when you set:

    !x : ~new_length [change number of pays per year] > account.length

the associated dataflow specification would know which other attributes to change. (I think that the notation I've used maintains the spirit of SMALL.)

Dave.
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> It isn't non-local accesses themselves that cause the problems; it's
> the relationship navigations. By embedding navigations in the ADFD
> you are creating a dependency between the OIM relationships and
> that ADFD. Thus a change to that relationship must be made in more
> than one place. If the navigation appears in multiple ADFDs, and
> especially if it is non-local, then it can require significant work.

Ah, now my karma has properly adjusted. I agree that this tends to be a problem. I am not bothered by it on a practical level as much as you seem to be -- but a problem nonetheless.

> If all your state actions are simple, and you cannot identify
> dependencies between the OIM and ADFDs, then you probably won't
> benefit. If you've ever had to make a simple change to the OIM,
> and found yourself editing more than a small number of ADFDs (or
> their equivalent) as a consequence, then you probably could.

I agree here too -- this is almost always a problem during any sort of maintenance.

> My mechanism is a good old-fashioned DFD (not ADFD).

My impression thus far is that this is an elegant and interesting proposal. However, I still have some questions just to make sure I understand it properly.

I assume you are proposing this notation for all attribute access, not just for "entangled" attributes(?) (It probably wouldn't do much to resolve your original problem of eliminating explicit relationship navigations for typical applications otherwise.)

Would you still use the ADFD to handle tests, transforms, and event generation? If not, where would these fit into the DFD scheme? Obviously, you can stick processes into the DFD as easily as an ADFD but this leaves two related problems. What is left in the action? How does one keep track of which actions are doing what if there is a single domain DFD?

If so, what is the link between the ADFD and the DFD? My hang-up here is that if the ADFD needs to dump some attribute data into a transform, how does one describe which data to use with sufficient detail that the appropriate track on the DFD can be followed without explicitly defining the navigation?

-- H. S. Lahman      There is nothing wrong with me that
Teradyne/ATD         could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp

In SMALL:

    !x : ~new_length [change number of pays per year] > account.length

In the A/SDFD world: ????

One of the requirements of SMALL was that everything could also be represented in the graphical form. Where would you propose putting the change method for the attributes? Should there be the possibility of an SDFD for read and one for write of each attribute? (On write do this, on read do that.) If so, are there any special rules to enforce? Can a change method send events? Can a change method write to other attributes which have their own change methods?

A version of the component model, which treats variables as 'properties' with read and write methods (and value defaults), might be a nice extension to SM. It could provide a clean interface to allow the analyst to define the M relationship without cluttering the process models which just want to use the value.
It could also be used to define the unit conversions or other actions which have been described on this forum as being placed in special accessors. This would allow for a more generic translator.

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com  www.transcrypt.com

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp: ...

>What I have observed is that it's more difficult to change a complete
>model than a model with just an OIM. This is hardly surprising; but it
>does point to the fact that the OIM does have a dependency on the
>state models and state actions. It is dependencies that cause problems
>when you want to change the model. My desire is to minimise that
>dependency.

>It isn't non-local accesses themselves that cause the problems; it's
>the relationship navigations. By embedding navigations in the ADFD
>you are creating a dependency between the OIM relationships and
>that ADFD. Thus a change to that relationship must be made in more
>than one place. If the navigation appears in multiple ADFDs, and
>especially if it is non-local, then it can require significant work.

...

It seems to me that the general solution to this problem is, as one ESMUGger alluded to, identifying and collecting recurring ADFD fragments into named processes. The result of this would be that there would be _one_ place to change the traversal:

    A --R1--> B --R2--> C

Is it within the intent of your proposal to allow this sort of thing? I, for one, would be completely in favor of such a concept.

Regards,
-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------
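[For comparison, the "named process" approach amounts to writing the traversal once, in one function, so a change to R1 or R2 is a change in exactly one place. A hypothetical C++ sketch, with invented names:]

    #include <vector>

    // Lynch's suggestion: the A --R1--> B --R2--> C traversal is
    // written once, in a named process.
    struct C {};
    struct B { std::vector<C*> r2; };   // B --R2--> C
    struct A { B* r1; };                // A --R1--> B

    // The one place the traversal is defined.
    std::vector<C*> cs_of(const A& a) { return a.r1->r2; }

    int main() {
        C c1, c2;
        B b{{&c1, &c2}};
        A a{&b};
        return cs_of(a).size() == 2 ? 0 : 1;
    }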
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> My impression thus far is that this is an elegant and interesting proposal.
> However, I still have some questions just to make sure I understand it properly.
>
> I assume you are proposing this notation for all attribute access, not just for
> "entangled" attributes(?) (It probably wouldn't do much to resolve your
> original problem of eliminating explicit relationship navigations for typical
> applications otherwise.)
>
> Would you still use the ADFD to handle tests, transforms, and event
> generation? If not, where would these fit into the DFD scheme? Obviously,
> you can stick processes into the DFD as easily as an ADFD but this leaves two
> related problems. What is left in the action? How does one keep track of which
> actions are doing what if there is a single domain DFD?
>
> If so, what is the link between the ADFD and the DFD? My hang-up here is that
> if the ADFD needs to dump some attribute data into a transform, how does one
> describe which data to use with sufficient detail that the appropriate track on
> the DFD can be followed without explicitly defining the navigation?

It's obviously slightly difficult for me to understand precisely what your concerns are - you probably aren't entirely sure yourself. So I'll write some comments which will hopefully overlap your thought processes sufficiently to clarify your concerns.

It is possible (probable) that some attributes would not appear in the domain's DFD. These would be those that are not derived, nor used in any derivations. (They could be placed on the DFD; but they would have no associated data flows - like putting a static object on the OCM.) This special case is identical to the current situation.

ADFDs would still exist, as they do at the moment. Their role would be primarily to specify the control flow in the application. It is the ADFD that contains the tests, event generation and read/write accessors. I have a feeling that synchronous wormholes would be useful in both ADFDs and the DFD (for the derivation function).

It is possible that ADFDs will still contain a small number of transform processes; however, I feel that the use of derived attributes to bring the transformed data to the object is probably a better approach in most cases.

The DFD is not attached to any specific object. In principle there is just one per subsystem. In practice it may be better to partition it into its unconnected graphs and give each partition a meaningful name.

The execution semantics of the DFD are not the same as for an ADFD. An ADFD process only activates when all its inputs are active. A DFD process would activate when the "change ripple" (the update starts at the attribute that was written by a write accessor and then spreads through the DFD) has reached all of its sensitised inputs. (There would have to be appropriate locks if the implementation is not single-threaded.) The change-ripple can only travel down sensitised paths: sensitised paths are those which either have no guards defined, or for which one of its guards is matched by the write accessor. Note that the implementation may be able to optimise much of this. My description of the semantics is based on a "how to move the counters when you simulate by hand" approach.

The semantics of a write accessor (which can only be used from within an ADFD or SDFD) must also be clarified. Firstly, if an attribute has no change_concepts (see OIM-of-OOA in my previous post) then it cannot have any write accessors (it is read-only). If it has exactly one change_concept (perhaps a better term could be found) then the write accessor sensitises the network with that concept. If there is more than one change concept for the attribute then the ASL must specify which concept should be used by the write accessor to sensitise the DFD (see example 2 in previous post).

Your last point concerned how to keep track of the data without specifying the navigations explicitly. Well, the navigations are specified. The DFD data flows are associated with relationships to show how the data spreads through the model. So the question must relate to pushing the data into the start of the pipe: defining the instance for the write accessor. Well, if you are writing to a remote instance then you'll still need to navigate to it (though you may be able to define a local entry point into the DFD). However, it is my experience that a lot more navigation is repeated while attempting to read data than for writing it. If you disagree, then I'll have to think of a deeper justification; for the moment, it's one of those things that seems intuitively obvious.

Hopefully this has clarified things a bit; or at least made it more obvious what it is that needs clarifying (or further thought). Perhaps you have a simple domain lying around that you could spend a few minutes with to see if you can spot any data paths. See how much ASL (or even how many states) you can get rid of. I find that a lot of effort is spent in state models just moving data around.

Dave.
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
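[These "move the counters" semantics can be hand-simulated in a few dozen lines. The C++ sketch below is entirely hypothetical: attribute values are doubles, derivations are plain functions, and a derivation here recomputes on each arrival of the ripple rather than waiting for all sensitised inputs. It only shows the sensitise-and-ripple mechanics on the mass/volume/density example.]

    #include <functional>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // A flow is sensitised if it has no guards, or one of its guards
    // names the concept used by the write accessor.
    struct Flow {
        std::string from, to;
        std::set<std::string> guards;   // empty => always sensitised
    };

    struct Dfd {
        std::map<std::string, double> value;
        std::vector<Flow> flows;
        // derivation function per derived attribute
        std::map<std::string,
                 std::function<double(const std::map<std::string, double>&)>> derive;

        void write(const std::string& attr, double v, const std::string& cpt) {
            value[attr] = v;
            ripple(attr, cpt);
        }
        void ripple(const std::string& from, const std::string& cpt) {
            for (const Flow& f : flows) {
                bool sensitised = f.guards.empty() || f.guards.count(cpt);
                if (f.from == from && sensitised) {
                    value[f.to] = derive[f.to](value);  // recompute target
                    ripple(f.to, cpt);                  // spread the change
                }
            }
        }
    };

    int main() {
        Dfd d;
        d.value = {{"mass", 2.0}, {"volume", 1.0}, {"density", 2.0}};
        d.flows = {{"volume", "mass",    {"quantity"}},
                   {"volume", "density", {"pressure"}}};
        d.derive["mass"]    = [](const auto& v){ return v.at("volume") * v.at("density"); };
        d.derive["density"] = [](const auto& v){ return v.at("mass") / v.at("volume"); };

        d.write("volume", 2.0, "quantity");  // mass changes, density doesn't
        std::cout << d.value["mass"] << ' ' << d.value["density"] << '\n';  // 4 2
    }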
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

I'm getting closer, but that's quite relative so bear with me...

> It is possible (probable) that some attributes would not appear in
> the domain's DFD. These would be those that are not derived, nor
> used in any derivations. (They could be placed on the DFD; but they
> would have no associated data flows - like putting a static object
> on the OCM.) This special case is identical to the current situation.

In my experience the number of attributes that are derived is a tiny fraction of all attributes in the domain; we have a number of domains with no derived attributes at all. As I read this, the vast majority of attributes would remain in the ADFD. If so, those long chains of annoying relationship traversals would still be making the ADFD brittle. But in a couple of paragraphs it seems to me you are using a different definition of "derived".

> ADFDs would still exist, as they do at the moment. Their role would be
> primarily to specify the control flow in the application. It is the
> ADFD that contains the tests, event generation and read/write accessors.
> I have a feeling that synchronous wormholes would be useful in both
> ADFDs and the DFD (for the derivation function)

If the read/write accessors are still in the ADFD, they have to get their instance IDs for non-local instances somewhere. That requires relationship navigation.

> It is possible that ADFDs will still contain a small number of transform
> processes; however, I feel that the use of derived attributes to
> bring the transformed data to the object is probably a better approach
> in most cases.

You seem to feel that most transforms that calculate attribute values can be eliminated from the ADFD by placing them in the DFD. In effect this would convert the attributes to derived attributes. It seems to me that this is the only way to get a lot of attribute writes into the DFD. The problem I have with this is that most of our transforms have more complicated processing requiring multiple inputs. Do you see this just going into one of the DFD processes?

> The execution semantics of the DFD are not the same as for an ADFD.
> An ADFD process only activates when all its inputs are active. A DFD
> process would activate when the "change ripple" (the update starts at
> the attribute that was written by a write accessor and then spreads
> through the DFD) has reached all of its sensitised inputs. (There
> would have to be appropriate locks if the implementation is not
> single-threaded.) The change-ripple can only travel down sensitised
> paths: sensitised paths are those which either have no guards defined,
> or for which one of its guards is matched by the write accessor.

I understand that the execution semantics are necessarily different. However, the normal mindset when dealing with the simultaneous view of time is that an action provides the boundary that the architecture has to deal with for maintaining consistent data and relationships. My concern is how this boundary is mapped into the change-ripple. Would a change-ripple _always_ be kicked off and finished within an action scope (e.g., within a write accessor)? I would assume that it would have to be.
> The semantics of a write accessor (which can only be used from
> within an ADFD or SDFD) must also be clarified. Firstly, if
> an attribute has no change_concepts (see OIM-of-OOA in my previous
> post) then it cannot have any write accessors (it is read-only).
> If it has exactly one change_concept (perhaps a better term could
> be found) then the write accessor sensitises the network with that
> concept. If there is more than one change concept for the attribute
> then the ASL must specify which concept should be used by the write
> accessor to sensitise the DFD (see example 2 in previous post).

My concern is with the simplest case: something like "write 23 to attribute X of all the As connected to me via R64". As I read this paragraph, this would have exactly one change_concept. So far so good. But what is being sensitized in the network? How does the invoking action find the right instance of A to write? Is this done on the DFD with something like

                    *
    parameter_value -----------> [X]
               R64

so that X_write (23) just primes the DFD change-ripple with parameter_value = 23?

-- H. S. Lahman      There is nothing wrong with me that
Teradyne/ATD         could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dana Simonson wrote:
>
> "Dana Simonson" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Dave Whipp
>
> In SMALL:
>
>     !x : ~new_length [change number of pays per year] > account.length
>
> In the A/SDFD world: ????

Yes, those are the only places where SMALL is used. The only change to SMALL would be to add the change-concept qualification to write accessors.

> One of the requirements of SMALL was that everything could also be
> represented in the graphical form. Where would you propose putting
> the change method for the attributes?

The change to ADFDs would be equivalent. The only change would be to add the change-concept into the write-accessor bubble.

> Should there be the possibility
> of an SDFD for read and one for write of each attribute? (On write do
> this, on read do that.) If so, are there any special rules to enforce?

I'm sorry, you'll have to rephrase the question. I'm not sure I understand it. As far as I am concerned, the only change to SDFDs would be that write accessors would be qualified where necessary.

> Can a change method send events? Can a change method write to other
> attributes which have their own change methods?

No. You should think of it as a transform process (or possibly a synchronous wormhole). It cannot write to other attributes - it's a transform with exactly one output (though, if you can come up with a good reason to change the proposal, it could be considered).

If you refer to my OOA-of-OOA fragment, you'll see that the relevant relationships are:

    "a dataflow flows from exactly one attribute"
    "an attribute sources 0 or more dataflows"
    "a dataflow flows to exactly one derivation function"
    "a derivation function receives one or more dataflows"
    "an attribute has zero or one derivation function"
    "a derivation function calculates the value of exactly one attribute"

I know I didn't put the full verb phrases on the diagram: ASCII art is somewhat limiting. Hopefully this clarifies the situation.
> A version of the component model, which treats variables as
> 'properties' with read and write methods (and value defaults),
> might be a nice extension to SM.

That is already the way I think of attributes. If I translate to C++ then an attribute is mapped to a query and a modifier. I'm basically proposing that the modifier can be a bit more than just a "set-method".

> It could provide a clean interface to allow the analyst to define
> the M relationship without cluttering the process models which just
> want to use the value.

I think that's a good summary of how I would like to use the extension (though I wouldn't use the word "interface"). When I first looked at (M) attributes, I thought that they could be used in this way; it was only later that I realised that the lack of rigour, plus the mutual dependence problem, would prevent this usage.

> It could also be used to define the unit conversions or other
> actions which have been described on this forum as being placed
> in special accessors. This would allow for a more generic translator.

Possibly. The purpose of the dataflow diagram is to allow data to be delivered to an object in a form that makes sense for that object (and therefore its ADFDs).

Dave.
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
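[The query/modifier mapping mentioned above might look like this in C++. This is a hypothetical translation pattern echoing the earlier SMALL fragment; no real translator is claimed to emit it, and all names are invented.]

    #include <cassert>

    // An attribute translated to a query plus modifiers, where each
    // modifier is more than a raw "set-method": it is qualified by a
    // modification concept and would kick off a different update.
    class Account {
        double length_;   // Length of Repayment, in years
    public:
        explicit Account(double length) : length_(length) {}

        double length() const { return length_; }            // query

        // One modifier per modification concept.
        void set_length_change_pays_per_year(double l) {
            length_ = l;
            // ...here the dataflow would update pay periods per year...
        }
        void set_length_change_payment(double l) {
            length_ = l;
            // ...here the dataflow would update the payment...
        }
    };

    int main() {
        Account a(5.0);
        a.set_length_change_payment(4.0);
        assert(a.length() == 4.0);
    }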
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lynch, Chris D. SDX wrote:
>
> "Lynch, Chris D. SDX" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Whipp:
> It seems to me that the general solution to this problem is,
> as one ESMUGger alluded to, identifying and collecting
> recurring ADFD fragments into named processes. The result of this
> would be that there would be _one_ place to change the traversal:
>
> A --R1--> B --R2--> C
>
> Is it within the intent of your proposal to allow this sort of
> thing? I, for one, would be completely in favor of such a concept.

The intent is to simplify ADFDs. The effect is to move many such traversals into a single diagram (which is a sort-of yes). I do not like the idea of just grouping the repeated fragments into cohesive functions (or whatever you call them) because it introduces a deeper hierarchy into the models (the current three levels are causing enough problems).

In my first SM training course, when the lack of encapsulation and hierarchy in the OIM seemed strange, it was explained that the SM philosophy was to expose everything; and it was felt that the earlier (SA) hierarchical DFDs were just about impossible to use. I completely agreed (and agree) with this analysis. My proposal is intended to allow for the simplification of ADFDs (thus making it easier to change the state models and OIM) whilst maintaining the flat (non-hierarchical) nature of an SM model.

Dave.
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> I'm getting closer, but that's quite relative so bear with me...

I have plenty of patience. I've been thinking about this (on and off) since before the big (M) attribute debate on this list a few months back. My mind probably makes all sorts of assumptions that I don't realise. Defending an idea is the only way to make sure *I* understand what I'm talking about.

> In my experience the number of attributes that are derived is a tiny
> fraction of all attributes in the domain; we have a number of domains
> with no derived attributes at all.

Snap. My proposal implies changing this.

> As I read this, the vast majority of attributes would remain in the
> ADFD.

Attributes do not appear in ADFDs: only their accessors. This contrasts with the DFD, where the attributes are actually part of the diagram.

> If so, those long chains of annoying relationship traversals
> would still be making the ADFD brittle. But in a couple of paragraphs
> it seems to me you are using a different definition of "derived".

The structure of an ADFD is frequently of the form "navigate the model to find some data. Apply transforms if necessary. Now use the data..." It is this input stage that I am attempting to eliminate. I envisage that all the data that an ADFD needs would be available as (derived) attributes on the object containing the ADFD. The DFD defines how the information gets from its source to the object where it is used; and would include the transforms. Thus the new structure of an ADFD would be "get data from 'this'; now use it..."

If you find you have output chains such as "navigate to find the object I want to write to ... now write the data" then this could also be moved into the DFD (but this may not be a clean model from an analysis point of view). However, such non-local writes are often accompanied by event generation; so the navigation would still be needed to find the destination of the event.

> If the read/write accessors are still in the ADFD, they have to get their
> instance IDs for non-local instances somewhere. That requires relationship
> navigation.

I envisage that most reads would become local; and that most transforms would be moved to the DFD. Non-local writes will still require navigation.

> You seem to feel that most transforms that calculate attribute values
> can be eliminated from the ADFD by placing them in the DFD. In effect
> this would convert the attributes to derived attributes. It seems to
> me that this is the only way to get a lot of attribute writes into the
> DFD.

Correct. An almost perfect summary.

> The problem I have with this is that most of our transforms have more
> complicated processing requiring multiple inputs. Do you see this just
> going into one of the DFD processes?

The number of inputs to the transform is not really an issue - a transform on a DFD can have multiple inputs. If you require multiple transforms then my proposal, as it stands, would require you to create intermediate derived attributes (which is also required to move data along multiple relationships).

> I understand that the execution semantics are necessarily different.
> However, the normal mindset when dealing with the simultaneous view
> of time is that an action provides the boundary that the architecture
> has to deal with for maintaining consistent data and relationships.
> My concern is how this boundary is mapped into the change-ripple.
> Would a change-ripple _always_ be kicked off and finished within an
> action scope (e.g., within a write accessor)? I would assume that it
> would have to be.

That would be a simplistic implementation.
However, the boundary for ending the update is that the data must be available when someone wants to use it; or when someone wants to start a new update ripple that would interact with the first. It can also be noted that some of the update can be moved into a read access. If the path to a derived attribute is not guarded then there is no need to store the values along the path. The read accessor can, in the implementation, trace it back to the nearest stored attribute. Finally, if you consider the mass, volume, density triangle, it is clear that only two of the three values need to be stored. An implementation may be able to make an intelligent choice of which.

> My concern is with the simplest case: something like "write 23 to
> attribute X of all the As connected to me via R64". As I read this
> paragraph, this would have exactly one change_concept. So far so good.
> But what is being sensitized in the network? How does the invoking
> action find the right instance of A to write? Is this done on the
> DFD with something like

The first question is: what is "23"? Is it meaningful to the local object? If it is, then you can create a meaningful local attribute, say "the_value". Your write accessor then writes to this. I.e. in SMALL:

    23 > this.the_value

The DFD can then transport this value to the "X" attributes on A. This, of course, makes "X" derived:

                *
    the_value---------->X
          R64

(remember, "the_value" is the local attribute; X is the derived attribute).

But, later, you realise that object A also wants to write to its attribute X. It can write:

    42 > this.X

But now there is a problem. "the_value" is driving 23; and "A.X" is driving 42. There is a conflict. We either need to turn off the "23" driver from "the_value" or we need to update "the_value" with 42 from X.

The first option requires a guard on the DFD:

    [concept]>the_value
                     *
    the_value---------------------------->X
                    R64

which says that this link in the DFD is only active when you write to the_value under [concept]. (If there's only one concept, then you could probably omit it; but that would just be a notational convenience.)

The alternative solution is that an update to X also updates the_value. This requires a bidirectional dataflow (or two dataflows in opposite directions). To prevent infinite loops in the update ripple, I'll need a guard in both directions:

    [concept]>the_value                        [concept]>X
                        *
    the_value<--------------------------------------->X
                       R64

Yes, it starts to get a bit more detailed; but I believe that the process of making these decisions will bring out useful analysis information. You might even find yourself writing a technical note that explains the concepts surrounding this specific dataflow.

Note that, in the second example, *both* attributes are derived. It was this recurrent theme of mutual dependence that led to me thinking of them as being entangled. But, as I said previously, I won't worry about the name.

Of course, if there is no meaningful "the_value" to add to the local object then you would just need to navigate to all A's in the ADFD - just like you do at the moment. However, I think it is likely that you would find an appropriate attribute.

Dave.

p.s. if you don't like the UML-style "*" to indicate cardinality, then you could use a double-headed arrow on the dataflow instead. That's just a question of notation.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to David Whipp...

> Basically, many of the concepts in OOD lead to
> models that are more maintainable than an SM-OOA model, even
> though the OOD approach uses elaboration.

If you're saying UML is more maintainable than SM, then I find this hard to believe. With SM all you need to do is to change the OOA then click on the translate button to get code (I'm not too sure about your own translation methods :-)). With OOD you must use a manual process of elaboration.

> The basic problem
> is that all data access navigations are coded in ADFDs. This
> means that objects may need to access non-local objects.

In the OOA, this is what helps make the problem understandable (by placing the model on a relational foundation). When implemented, the data structures may look completely different.

> This makes the models brittle.

I would say: partitioned correctly to allow for further change.

> A complete SM model consists of an information model, objects
> may have state models and states have ADFDs. The presence of
> the ADFDs can severely hamper the modification of the
> information model because the actions for one state may navigate
> several relationships, including those that may be quite distant
> from the object containing the ADFD.

Does this problem really exist? If such a change is required, the OIM is the first part to update. I find most queries don't involve that many objects anyway.

> This brittleness can be identified by the use of the Object
> Access Model (OAM). This much undervalued view is the synchronous
> counterpart of the OCM. When an object accesses many other
> objects, this shows up as tangled clusters on the OAM.

I think you're right about the OAM being undervalued and I admit it's not the first diagram I draw. :-)

> As a
> general rule, I like to be able to draw diagrams in 2 dimensions
> with a minimum number of crossed lines.

This is an interesting subject. For a few years now, I have made a point of avoiding any crossing lines on diagrams. I think following this rule has improved my models a great deal. Whether a crossing line is just an indication that a diagram is getting too large or whether the rule has some topological basis in physical reality, I don't know. I hope it's the latter. The rule is also a useful guide when reviewing other people's work. By examining the objects or states joined to crossing lines you can quickly identify any problem areas.

> Some people have found the need to extend the method to allow
> object-based synchronous services. These are shown on the OCM
> (not the OAM!) and are a recognition of the problem of non-local
> data access. By packaging the access within a synchronous service
> on a distant object, the modeller can eliminate many non-local
> navigations and accesses.

I, too, find it incredible that synchronous services are shown on the OCM. Until reading this post, I was not particularly aware that intra-domain data access was a problem, although I knew of at least one tool vendor that provided it "in the course of consultancy". It seems to me that this sort of facility "helps" newbie (and not so newbie) users to use SM-OOA at the cost of downgrading their own models in particular and degrading the method in general.

Perhaps the real problem is that "analysis is hard" [Mellor's No.1 Rule] and SM-OOA is even harder.
So it should not be surprising that users should wish for trapdoors to relieve their analysis torment. :-)

> In OOA96, the concept of the mathematically dependent attribute
> was introduced. This allows information to be represented on the
> OIM that is not in "normal form".

A mistake, IMO. I remember wondering at the time if its introduction was connected with Bridges or related to the development of BridgePoint. While there is certainly a need for *variables* to convey results of calculations from one part of a model to another, they should not appear on the OIM.

> The ability to put such
> attributes in the OIM allows the modeller to reduce some non-local
> accesses; but not with the same power as the synchronous
> service.

I've no problem with extra variables in the implementation to make it go faster by accessing a previously calculated result.

> A few months ago, PT clarified that the (M) attribute
> must be updated by the model itself; there is no magic in the
> metamodel that takes advantage of the derived nature of the
> attribute.

There can be no magic anywhere, it's all a bit of a slog.

Mike
--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:
> Responding to Dave Whipp...
> > Basically, many of the concepts in OOD lead to
> > models that are more maintainable than an SM-OOA model, even
> > though the OOD approach uses elaboration.
>
> If you're saying UML is more maintainable than SM, then I find this
> hard to believe. With SM all you need to do is to change the OOA
> then click on the translate button to get code (I'm not too sure
> about your own translation methods :-)). With OOD you must use a
> manual process of elaboration.

My statement should be clarified to read that an OOD design model is easier to maintain than an SM-OOA analysis model. Yes, we're trying to maintain different things; but that doesn't mean that SM-ers should ignore the maintenance issue for analysis models.

The reason why an OOD model (it has nothing to do with UML) is easier to maintain than an OOA model is, fundamentally, that the people working on OOD have placed significant emphasis on developing design techniques to allow them to modify their code manually. SM people just press the "generate-code" button so they don't really care. Unfortunately, this carelessness seems to infect the OOA too -- and we do need to maintain that.

I know that the purpose of OOA is to understand the problem; whereas the purpose of OOD is to construct software. One is discovery and one invention. It is, perhaps, natural that a process of discovery should not over-emphasise the construction (invention) of reusable models. However, it must be recognised that the SM method is intended to provide a framework for analysis that leads to good models. I would include maintainability in the definition of "good". I believe techniques can be found that aid the maintainability of the model whilst not distracting the analyst.

> > The basic problem
> > is that all data access navigations are coded in ADFDs. This
> > means that objects may need to access non-local objects.
>
> In the OOA, this is what helps make the problem understandable (by
> placing the model on a relational foundation). When implemented, the
> data structures may look completely different.

This statement makes me think we may be talking at cross purposes.
I am purely talking about maintainability of the model, not the implementation. Perhaps you were just checking my viewpoint though.

> > This makes the models brittle.
>
> I would say: partitioned correctly to allow for further change.

A model that is correctly partitioned for change would ensure that any possible change would be made in only one place. Of course, this ideal is impossible to achieve; but we should at least try and minimise the scope of foreseeable change. This is the emphasis of OOD for making changes to code - and it has been fairly successful. Guidelines like dependency inversion and the open/closed principle (OCP) DO allow a designer to *design* code that is resilient to predictable changes. Now, analysis isn't design, but the ideal of one change in one place is still desirable.

Currently in SM, one change in the OIM can lead to a much larger number of changes in the ASL. There is a hideous dependency that spans three modelling levels (OIM, state model, ADFD). My proposal for a DFD at just one level below the OIM (parallel to the OCM), and not bound to a specific object, is intended to reduce (though not, unfortunately, eliminate) this dependency.

> > A complete SM model consists of an information model, objects
> > may have state models and states have ADFDs. The presence of
> > the ADFDs can severely hamper the modification of the
> > information model because the actions for one state may navigate
> > several relationships, including those that may be quite distant
> > from the object containing the ADFD.
>
> Does this problem really exist? If such a change is required, the
> OIM is the first part to update. I find most queries don't involve
> that many objects anyway.

Such queries are common enough for a specific chained-navigation syntax in most ASLs, including SMALL. And every navigation is a dependency. As to whether the problem really exists, well, I'm only one data-point. However, if you ever work in an environment where the version control of the requirements spec uses irrational numbers, then you'll see just how resilient your models are(n't).

> This is an interesting subject. For a few years now, I have made a
> point of avoiding any crossing lines on diagrams. I think following
> this rule has improved my models a great deal. Whether a crossing
> line is just an indication that a diagram is getting too large or
> whether the rule has some topological basis in physical reality, I
> don't know. I hope it's the latter. The rule is also a useful
> guide when reviewing other peoples work. By examining the objects
> or states joined to crossing lines you can quickly identify any
> problem areas.

I think that crossed lines are a visible effect of dependency in the diagrams. The more dependency you have, the more lines are needed to express it. If you have more than four fully interconnected things then you have crossed lines. The more things you have, the fewer connections you are allowed before you start crossing lines. So by managing your lines, you are managing the dependencies. Thus the models improve.

As soon as you introduce hierarchy into a model (such as the 1:M relationship between objects in the OIM and ADFDs), you introduce mechanisms for hiding the dependencies through encapsulation. A classic example is the hierarchical DFD of years gone by. Another example is the hierarchical state machine (with superstates) - very useful, but dangerous.

> Perhaps the real problem is that "analysis is hard" [Mellor's No.1
> Rule] and SM-OOA is even harder. So it should not be surprising
> that users should wish for trapdoors to relieve their analysis
> torment. :-)

Yep, and all such trapdoors should be subjected to stringent critical review before they are accepted into the method. When a vendor adds a feature, it is often added as a differentiating feature (look, our tool has more features) without regard for its methodological soundness.

> > In OOA96, the concept of the mathematically dependent attribute
> > was introduced. This allows information to be represented on the
> > OIM that is not in "normal form".
>
> A mistake, IMO. I remember wondering at the time if its introduction
> was connected with Bridges or related to the development of
> BridgePoint. While there is certainly a need for *variables* to
> convey results of calculations from one part of a model to another,
> they should not appear on the OIM.

To say it was a mistake is probably a bit too strong. Most definitely though, it was not properly thought through. Does your comment about "variables" indicate that you'd prefer my DFD concept if its nodes weren't attributes? Or are you just bringing in implementation details again?

> There can be no magic anywhere, it's all a bit of a slog.

There's plenty of "magic" in SM: you generate events and they get delivered; you draw a state model and it transitions; you construct an ADFD and it executes, etc. Compare this with the UML world (well, it does have state machine generators). Any technology sufficiently advanced...

Dave.
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de    Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have the opportunity to delve into the realm of embedded systems. Are there OOA IM abstractions for such HW/SW thingys as:

    Analog->Digital Converters
    Digital->Analog Converters
    UARTS
    BUS
    Processor
    Interrupt Service Routines

Kind Regards,

Allen

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> I have the opportunity to delve into the realm of embedded systems.
> Are there OOA IM abstractions for such HW/SW thingys as:
>
> Analog->Digital Converters
> Digital->Analog Converters
> UARTS
> BUS
> Processor
> Interrupt Service Routines

Do you mean: "is it possible to build an SM model of these things"; or do you mean "is it possible to use them in an abstract way within an SM model"? The answer to both questions is "yes"; but the details differ.

The first alternative is quite simple in principle. The hardware components are devices which move data around under the control of state machines. Obviously SM can model that. The only issue really is where to put the bridges. In the case of a DAC or ADC, I find it is best to model the digital (control) part in SM, and bridge to the analogue bit. It very much depends on how you intend to model the analogue values. (There is also the question of how you abstract the programmer's interface: if you model the programmer's registers in the same domain as the core behaviour then you'll have problems.)

The second alternative is a bit more interesting: how to use the hardware components transparently. The answer is to think of them as architectural mechanisms. For example, UARTs move data around; processors execute code. Some DMACs do both. You can dream up all sorts of scenarios for utilising such components.
You can derive some architectural use cases from the method (for example, synchronous data access and event delivery); but you should think of the architecture as "the thing that supports the application", rather than "the thing that I'm translating onto". Automated resource allocation (in the translator) is an interesting subject: there's plenty of scope for research.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

That is very interesting... What you said about architectural mechanisms sparked another question about architecture and OSes and domain pollution. Is the OS really accessed via bridges and wormholes? Say I have a "CreateThread(pfn*)" function call in my OS. Is calling that directly a no-no?

Kind Regards,

Allen

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Allen Theobald wrote:
>
> Is the OS really accessed via bridges and wormholes? Say I have
> a "CreateThread(pfn*)" function call in my OS. Is calling that directly
> a no-no?

If by "calling directly" you mean referencing it from your state model or ADFD, you are correct: that is not allowed. At the lower levels, in generated code for actions, or in the system startup routine, such a call might be found.

Keep in mind that the ADFD (i.e. state action) exists in the context of an OOA "virtual machine", where anything which might be provided by a thread in an ordinary computer program is assumed by the execution rules of OOA and realized by the architecture.

-Chris
-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca  LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > In my experience the number of attributes that are derived is a tiny
> > fraction of all attributes in the domain; we have a number of domains
> > with no derived attributes at all.
>
> Snap. My proposal implies changing this.

OK, I think the main piece I was missing is that all attributes can become derived.

> I envisage that most reads would become local; and that most transforms
> would be moved to the DFD. Non-local writes will still require
> navigation.

Would a derived local attribute be declared on the OIM? Or would you simply map local transient variables in the ADFD as derived attributes in the DFD? [I know OOA96 said all transient variables had to be attributes, but I think that is unworkable for the reasons that came out of the reviews.]

This seems to be my last problem. I don't see why the non-local writes still need to be navigated, especially if a local derived attribute is defined in the OIM. Doesn't the change-concept embody all the necessary information needed to properly navigate the DFD?

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> What you said about architectural mechanisms sparked another
> question about architecture and OSes and domain pollution.
> Is the OS really accessed via bridges and wormholes? Say I have
> a "CreateThread(pfn*)" function call in my OS. Is calling that directly
> a no-no?

It's always wrong to call it directly. Calling via a wormhole/bridge may be acceptable in some circumstances (though I can't think of a suitably contrived example just now). In general though, some OS facilities will be used via wormholes; and others will be used as architectural mechanisms. The client domain never knows that it is using an OS facility -- it's just an OOA model that asks for services via wormholes.

If you want to use it as an architectural mechanism then you'll want to tie it to some part of the OOA-of-OOA meta model (this is actually a slightly simplistic view, but it's usable. A better view is to tie OS services to a part of the OOA-of-Architecture). For example, you may decide that all events will be handled by starting a new thread; and that the thread will terminate when the target state's actions have executed (this is probably not a good idea, but it's OK for an example). In this case, you would map the "generate" in the OOA to "CreateThread" in the OS, with whatever additional code is needed to make it work. Similarly, you'd add a DestroyThread call at the end of the state action. This mapping would be performed within the translator.

However, I would strongly recommend that you properly analyse the architecture when writing the translator. Ideally, you'll construct an OOA-of-ThreadedArchitecture; and construct a set of mappings from OOA-of-OOA to the architecture. (If you don't want to simulate the architecture then an OIM-of-Architecture is sufficient, and a lot easier to produce.) Once you are happy with the architecture, you can attempt to map it onto the implementation environment. I would recommend that you don't follow a strict waterfall when doing this; it's much easier to develop the architecture + translator + code generator in small iterative loops. It's also better if you create test data for the architecture that is not related to SM-OOA.

Many people, in practice, don't do proper architectural analysis (I tend to be one of them). The analysis they do is similar to the UML "conceptual" view - detail is patchy (only for the interesting bits) and then the model is discarded for the actual implementation (it may be replaced by a detailed implementation model; but that's for the implementation of a merged translator+code_generator). This elaborational approach is simpler than the SM-RD one; but it isn't really in the spirit of the method. Also, it doesn't allow you to simulate your architecture.

Dave.

p.s. if you want to get into the question of what an "OOA-of-Architecture" looks like, then I'm happy to do so; but I'll choose something much simpler than multi-threading.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> I have the opportunity to delve into the realm of embedded systems.

Beware.
Thear Be Wilde Beasties Thear.

> Are there OOA IM abstractions for such HW/SW thingys as:
>
> Analog->Digital Converters
> Digital->Analog Converters
> UARTS
> BUS
> Processor
> Interrupt Service Routines

Only in the mind of the analyst. I suspect this question stems from S-M's reputation as a methodology for R-T/E systems. The basis for this reputation is not customized notational artifacts (e.g., Bus), but the reliance upon finite state machines, asynchronous messaging, and other mathematical formalisms that tend to be closely associated with R-T/E systems. For example, the notation and the methodology both _assume_ asynchronous communications and they hang together expressly to support this -- the synchronous communications that are implicit in other notations and methodologies are just a special case of the more general asynchronous situation.

Bottom line: you supply the semantics for the notational constructs.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Would a derived local attribute be declared on the OIM? Or would you
> simply map local transient variables in the ADFD as derived attributes
> in the DFD? [I know OOA96 said all transient variables had to be
> attributes, but I think that is unworkable for the reasons that came
> out of the reviews.]

I was thinking that they would be treated as (M) attributes for the OIM, i.e. they would appear on it. The problem with transient attributes is that they were valid only for a specific state; and then only during entry to that state. In contrast, derived attributes always have a valid value.

The other reason for showing them is that to omit them would be to lose important information. Consider the mutually dependent triangle of mass, volume and density. They may all be derived attributes, but you wouldn't want to omit them all.

> This seems to be my last problem. I don't see why the non-local writes
> still need to be navigated, especially if a local derived attribute is
> defined in the OIM. Doesn't the change-concept embody all the necessary
> information needed to properly navigate the DFD?

If you can convert it into a local access then no navigation is needed. But I did not want to be too radical in my proposal. If you want to extend it to say that all non-local writes are disallowed, then be my guest.

The type of situation where I would envisage a non-local write would be the idiom of held events. This standard construct involves finding a remote instance, sending an event to it and setting a flag to say that the event has been sent. You could set the flag locally; but, having navigated to find where to send the event, performing a non-local write on that instance is not a major problem.

Dave.

p.s. you missed my "deliberate" mistake in the execution semantics of DFDs. In one place, I spoke of the "change-ripple" spreading from the write; in another, I spoke of an attribute "driving" a value onto a dataflow (i.e. just like hardware). The two are almost, but not quite, equivalent. (I prefer the latter, but implementors may prefer the former.) The two are made equivalent by the addition of a simple constraint: the sensitised graph must be a tree, starting at the written attribute.
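To make the "change-ripple" semantics concrete, here is a toy sketch in C++ (all names are invented; this illustrates the constraint, not SM notation). Each node pushes a written value through its outgoing transforms; because the sensitised edges form a tree rooted at the written node, the ripple terminates and agrees with the "driving" view:

    #include <functional>
    #include <iostream>
    #include <vector>

    // A DFD node: a value plus edges to dependent nodes. Writing a
    // node ripples the new value through each edge's transform. The
    // recursion terminates because the sensitised graph is a tree.
    struct Node {
        double value = 0.0;
        struct Edge { Node* to; std::function<double(double)> f; };
        std::vector<Edge> out;

        void write(double v) {                  // write accessor
            value = v;
            for (auto& e : out)                 // the change-ripple
                e.to->write(e.f(value));
        }
    };

    int main() {
        Node moviePrice, tapePrice, rentalCost;
        const double days = 3;                  // invented transform input
        moviePrice.out.push_back({&tapePrice, [](double p) { return p; }});
        tapePrice.out.push_back({&rentalCost,
                                 [days](double p) { return p * days; }});
        moviePrice.write(2.50);                 // ripples through the tree
        std::cout << rentalCost.value << "\n";  // prints 7.5
    }

If the graph contained a sensitised cycle, write() would recurse forever; that is one way to see why the tree constraint makes the two semantics equivalent.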
--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> What you said about architectural mechanisms sparked another
> question about architecture and OSes and domain pollution.
> Is the OS really accessed via bridges and wormholes? Say I have
> a "CreateThread(pfn*)" function call in my OS. Is calling that directly
> a no-no?

I agree with Whipp in that the answer is usually None Of The Above. But let me put a somewhat higher level justification on this.

One way to view this is that the OS is a realized domain. As such, other domains are prohibited from knowing anything about its internals, so they must communicate with it via bridges. However, the OS is a pretty low level domain and it is unlikely that most domains would talk to it directly via a bridge. Most domains are at a much higher level of abstraction, so OS calls are invoked when the translation creates a specific implementation, which was Whipp's point.

Your thread example is a good example of this. Creating a thread is a specific implementation that may not even be available on some platforms. Moreover, other mechanisms (e.g., semaphores, mailboxes, RPCs) might be viable alternatives. The application's level of abstraction would not think in terms of threads directly.

A common situation where a thread might be invoked is where one domain wants data to be returned synchronously but the domain supplying the data must process events to calculate it, so it has to return the data asynchronously. The requesting domain just wants the synchronous data and doesn't care about the other domain's problems. The architecture needs to resolve this in the bridge. One way to do it is by hibernating/waking a thread in the bridge. Another might be a periodic polling loop. The key issue is that in the OOA the domain is saying, "Gimme data!" but this is being mapped in the translation into a specific OS implementation.

Now let's carry this one step further. Suppose an action is reading data from another object's instance and that instance is not on the same machine. In this case the architecture might map thread processing into the read accessor itself to handle the cross platform communication. In this situation no wormhole is explicitly involved at all -- just a simple read accessor.

The bottom line is that the OOA is thinking about things in an abstract way, such as an accessor, but the translation maps this abstraction into implementation. Since an OS is about as pure implementation as you can get, the OOA should never want to think in terms of OS processes.

As a final example, consider that you need a date stamp. So your action invokes a wormhole to get the current date. Superficially this looks like you are invoking the OS Date function with a wormhole wrapper. However, you really aren't. Your action is merely requesting the date in a particular format from some unspecified external entity. It is only coincidental that the OS happens to have a Date function that the wormhole maps directly into. In the foreseeable future this will no longer be true -- that wormhole will have to invoke a library function to calculate the date because all OSes will only provide dates in ABG format (Anno Bill Gates).

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I was thinking that they would be treated as (M) attributes for the OIM,
> i.e. they would appear on it. The problem with transient attributes
> is that they were valid only for a specific state; and then only during
> entry to that state. In contrast, derived attributes always have a valid
> value.
>
> The other reason for showing them is that to omit them would be
> to lose important information. Consider the mutually dependent
> triangle of mass, volume and density. They may all be derived
> attributes, but you wouldn't want to omit them all.

I am just somewhat bothered by the OIM clutter that is there simply to support the notation. For mass/volume/density this is, indeed, important information because there truly is a derived aspect to the relationship. But these are few and far between. When you are simply eliminating ADFD identifier accessor chains, it seems to me that the "derived" attributes are highly artificial -- they mostly just indicate a link that is already implicit in the OIM relationship chain. More importantly, the derived attributes don't have a meaning in the problem space, other than being synonyms. In the OIM I would regard this as distracting.

I think I would prefer introducing a special accessor, say a navigation accessor, whose first argument identifies the DFD starting point (i.e., a DFD label or change-concept). The data arguments could be positionally mapped to the DFD data elements or a replacement notation (dfd_X = arg_Y) could be used. Now it would not matter whether the local data was transient or attributes, or even attributes from yet another object.

The navigation accessor would only be used to eliminate ADFD accessor chains and some simple transforms; if the data were truly derived, then a normal accessor would be used. The fact that the navigation accessor is syntactically recognizable is sufficient to explain what is going on at the level where it is of interest -- in a specific action.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

How do you avoid producing state models that, when translated, produce circularities (recursion) in the code they generate?

I don't have a specific example, but it occurred to me that if the 'action' upon arriving in state 'A' generated an event that took it to state 'B', which subsequently generated an event that took it back to 'A', the recursion could potentially cause stack overflows in the implementation.

Of course for that to happen the actions *and* events would have to be synchronous function calls. I'm sure that violates some aspect of the model.
Kind Regards,

Allen

Tim Wadsworth writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Allen Theobald writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> How do you avoid producing state models that, when translated, produce
> circularities (recursion) in the code they generate?
>
> I don't have a specific example, but it occurred to me that if the
> 'action' upon arriving in state 'A' generated an event that took
> it to state 'B', which subsequently generated an event that took
> it back to 'A', the recursion could potentially cause stack
> overflows in the implementation.

Upon completion of an action, if the implementing function returns to its caller before the next event is processed, there is no recursion in such cases.

> Of course for that to happen the actions *and* events would have to be
> synchronous function calls. I'm sure that violates some aspect of the
> model.
>
> Kind Regards,
>
> Allen

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> How do you avoid producing state models that, when translated, produce
> circularities (recursion) in the code they generate?
>
> I don't have a specific example, but it occurred to me that if the
> 'action' upon arriving in state 'A' generated an event that took
> it to state 'B', which subsequently generated an event that took
> it back to 'A', the recursion could potentially cause stack
> overflows in the implementation.
>
> Of course for that to happen the actions *and* events would have to be
> synchronous function calls. I'm sure that violates some aspect of the
> model.

There are several classes of architectures. One of them is a synchronous architecture where events can indeed be mapped into synchronous action function calls. In this case such recursion would be a problem that the architect would have to deal with when defining the stack depth. This is a perfectly valid architecture, though, since the synchronous case is just a special case of asynchronous where the events always occur in a deterministic order.

[Note that this is a general problem with OO because objects talk directly to one another rather than through functional decomposition trees and because OO breeds lots of small functions. That is, OO tends to execute depth-first while functional decomposition tends to execute breadth-first. The methodologies that define objects based upon functionality, though, tend to have lots of Controller and Manager objects that effectively reintroduce functional decomposition.]

If you are not using a synchronous architecture, recursion is not a problem because events are usually processed through a central event queue manager. When an event is generated, an entry is simply placed in the queue data store. The queue manager processes that event when it is done with the current event. In practice this means that control always returns to that queue manager between events (i.e., the call graph is a spider with the queue manager lurking in the center with tentacles extending out to the object actions).

Things get stickier if one assumes a simultaneous view of time within a domain. This allows multiple actions to be executed simultaneously (e.g., in a multiprocessor environment). Essentially you have multiple threads running together, each taking up stack space.
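As a minimal sketch of the queue-manager ("spider") architecture just described -- all names invented, and deliberately ignoring event data and instance identity -- the essential point is that generate() only enqueues, so the stack never grows with the length of an A -> B -> A event chain:

    #include <functional>
    #include <iostream>
    #include <queue>

    // A central event queue manager: generated events are queued and
    // dispatched one at a time, so control returns to the manager
    // between actions and state cycles iterate instead of recursing.
    struct EventQueue {
        std::queue<std::function<void()>> events;
        void generate(std::function<void()> action) {
            events.push(std::move(action));    // just record the event
        }
        void run() {                           // the spider's centre
            while (!events.empty()) {
                auto action = std::move(events.front());
                events.pop();
                action();                      // control returns here afterwards
            }
        }
    };

    int main() {
        EventQueue mgr;
        int bounces = 0;
        std::function<void()> enterA, enterB;  // the two state actions
        enterA = [&] { if (++bounces < 3) mgr.generate(enterB); };
        enterB = [&] { mgr.generate(enterA); };
        mgr.generate(enterA);                  // A and B ping-pong, bounded stack
        mgr.run();
        std::cout << bounces << "\n";          // prints 3
    }

Even with several such dispatchers running in parallel (one per thread or processor), each stack stays bounded; the limits that matter are the ones discussed next.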
In practice there are usually restrictions that limit the number of actions that can be processed simultaneously (e.g., the number of processors, too many instances' data being locked, etc.). While blowing the stack is possible, it is usually unlikely. If it is an issue, then there are defensive mechanisms the architecture can employ, depending upon the environment.

BTW, if you have a synchronous situation, you might still want to use the simple asynchronous architecture. Hopefully the event queue manager and accomplices are highly optimized so the overhead is not very much and you don't even have to think about blowing the stack.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

"Yeager, John D (John)" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Don't get me started again.....

This has been discussed before. I think you can find the archives at the Project Technology web site. The quick answer is that for a synchronous event architecture to work, you need to handle such looping cases. The solution is to refuse to dispatch the event synchronously when the object is already active. This requires that you have *both* a synchronous and an asynchronous architecture, with the asynchronous architecture handling the failure cases.

An optimization is to mark the object no longer busy just before sending an event as the last act in the action routine. This allows harmless recursion. Note that you then have to disallow synchronous dispatches if an object is either busy *or* has queued asynchronous events from the same sender.

The alternative is to perform an off-line analysis which finds these ordering problems and reimplements states to solve them. This can be quite non-trivial. One starts with the observation that sending one final event is always safe. Beyond that, it can get quite difficult, especially if one goes into a loop sending events.

I once built a synchronous dispatcher which used a simplified algorithm:

a) Introduce an architectural state of "busy"
b) All objects are placed in the busy state at dispatch
c) The new state is posted just before sending a final event, or at the end of the action if the last act is not generating an event
d) The synchronous state handler for the busy state enqueues the event for asynchronous handling
e) The synchronous dispatcher calls the queuing handler if there are any events queued for the target instance (less precise than testing for already-queued events from the same sender)

In this case, asynchronous events were also needed when the dispatching context (task priority, interrupt handler, etc.) of the sender was different than the receiver's.

John D. Yeager
Lucent Technologies, Bell Labs
Internet Business Systems, Business Communications Systems
mailto:johnyeager@lucent.com
Room 1K-201, 101 Crawford Corners Rd., PO Box 3030, Holmdel, NJ 07733-3030
voice: 1-732-817-3085  fax: 1-732-817-3085

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> I am just somewhat bothered by the OIM clutter that is there simply to
> support the notation. For mass/volume/density this is, indeed, important
> information because there truly is a derived aspect to the relationship.
> But these are few and far between.
> When you are simply eliminating ADFD identifier accessor chains, it
> seems to me that the "derived" attributes are highly artificial --
> they mostly just indicate a link that is already implicit in the OIM
> relationship chain. More importantly, the derived attributes don't have a
> meaning in the problem space, other than being synonyms. In the OIM I
> would regard this as distracting.

I am not sure that they are "artificial" - just non-normalised.

I frequently find that, when brainstorming an object's attributes, I come up with attributes which are found to belong to another object. For example, if analysing a "can of soup" object I might think that it has a "price" attribute; only to find, later, that the price actually belongs in a specification object.

I tend to find this type of attribute shuffling happens quite often before a good, normalised, OIM is achieved. If such attributes were kept as "derived" attributes, then I would not call them "artificial".

But you are right that they would increase the number of attributes on the OIM. And it is possibly true that you would end up introducing a few that do seem artificial. I would liken this to the analyst who, having introduced an M:M relationship, is forced to introduce an "artificial" associative object. It isn't really artificial; that label is a result of the analyst's mindset.

One of the things I like about my proposal is its simplicity. It adds only 3 objects to the OOA-of-OOA implied by OOA'96. I can think of several, more complex, variations; but the additional complexity doesn't seem to add anything. You may be interested in 3 alternatives that I dismissed before presenting my "simple" proposal:

One possibility that doesn't add too much complexity is that intermediate nodes on the DFD (those that are neither read nor written from any ADFD) could be omitted from the OIM. This would introduce an extra 2 objects into the OOA-of-OOA because we would need a node object and two subtypes (one of which is the "attribute").

Another possibility is to separate the derivations from the attributes. This is similar to the previous alternative; but the intermediate nodes would be apparent by a dataflow flowing directly from transform to transform. The big problem with this idea is that even the simple triangular formulae (M=DV) would require crossed lines on the DFD - which seems very messy. Its biggest advantage would be that a transform could have multiple outputs.

The third possibility is an extension of the previous one. The idea is that a special transform process is introduced which can apply its formulae in different rearrangements. Each of its variables would be available as both an input and an output; it would calculate each output in terms of the other inputs. So, "V=IR" would also output "I=V/R" and "R=V/I". The "change concept" would sensitise the appropriate output flow(s).

These three alternatives, while interesting, do not really achieve very much more than the proposal I presented; and they are more complex. The SM philosophy has never been to go for the most powerful concept. It has been to go for the simple, yet adequate. Thus state models are minimalist Moore machines. More powerful variants exist but they aren't necessary.

> I think I would prefer introducing a special accessor, say a
> navigation accessor, whose first argument identifies the DFD
> starting point (i.e., a DFD label or change-concept).
> The data arguments could be positionally mapped to the DFD data
> elements or a replacement notation (dfd_X = arg_Y) could be used.
> Now it would not matter whether the local data was transient or
> attributes, or even attributes from yet another object.

To use such an operator would require a "hidden node" as the start point of the DFD chain. It is needed to allow a change-concept to be attached to the relationship navigation. As soon as you have such a node, the navigation operator becomes almost a normal write accessor. The following example should clarify. If you have 2 objects, A and B, connected by relationship R1, then the DFD may say:

          [default]
    A. ------------------> B.attr
          R1

and an action in A could say:

    ~value [default]> B.attr

(or even, if you assume that a default needn't be specified, "~value > B.attr"). Note that "B" in these SMALLish snippets is an object name, not an object reference.

There is the issue of identifying the hidden nodes. In this case, it doesn't matter; but in more complex scenarios there may be multiple hidden nodes at each object. If you name information associated with a specific object then this seems, to me, to be an attribute. But I will admit that the idea of using the change-concept to specify a non-local write does seem to be quite a nice idea. You would have to constrain the usage to attributes with no associated derivation (or whose derivation is the identity transform).

> The navigation accessor would only be used to eliminate ADFD
> accessor chains and some simple transforms; if the data were
> truly derived, then a normal accessor would be used. The fact
> that the navigation accessor is syntactically recognizable is
> sufficient to explain what is going on at the level where it is
> of interest -- in a specific action.

It does allow the navigation to be decoupled from its relationship name (or referential attribute(s)). This eliminates the dependency and thus enhances maintainability. I do not think that a specific operator is necessary; the fact that a write accessor specifies an object name is sufficient to distinguish it. However, I haven't thought through this modification fully; there may be a hidden gotcha somewhere. (I would also like to think about whether events could be directed using a similar approach, on the OCM, to eliminate all navigations from the ADFDs ... perhaps not.)

The question is, do people think a proposal along these lines would enhance the method; or is it just an unnecessary "nice idea"? I believe that it would enhance the maintainability of the models and simplify the ADFDs whilst making only a small change to the meta model. It doesn't break anything in the existing method and provides a formal basis for the recently introduced (M) attribute.

Dave.

p.s. If anyone can think of a better word for "change-concept" then I'll be delighted to hear it

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I am not sure that they are "artificial" - just non-normalised.
>
> I frequently find that, when brainstorming an object's attributes, I
> come up with attributes which are found to belong to another object.
> For example, if analysing a "can of soup" object I might think that
> it has a "price" attribute; only to find, later, that the price
> actually belongs in a specification object.
>
> I tend to find this type of attribute shuffling happens quite often
> before a good, normalised, OIM is achieved. If such attributes were
> kept as "derived" attributes, then I would not call them "artificial".
>
> But you are right that they would increase the number of attributes
> on the OIM. And it is possibly true that you would end up introducing
> a few that do seem artificial. I would liken this to the analyst who,
> having introduced an M:M relationship, is forced to introduce an
> "artificial" associative object. It isn't really artificial; that
> label is a result of the analyst's mindset.

I have to disagree on this. I could argue that I don't like them for all the reasons you have previously offered against (M) attributes. But let me try my own spin...

First, the OIM is _supposed_ to be normalized. The only exceptions are referential attributes and (M) attributes. While the (M) attribute is defined in terms of a mathematical formula, the "derived" attribute that results from simplifying the ADFD is related to the real attribute by simple equality in most cases. (When it isn't, you have merely moved ADFD transforms to the DFD.) Basically I am arguing that (M) attributes characteristically have a non-trivial formula associated with them that justifies their presence in the OIM; otherwise they would violate normal form.

Second, (M) attributes exist in the OIM only because there is a problem space need for them. The "derived" attributes from simplifying the ADFD are there purely to accommodate the DFD notation. That is, they are relevant to the dynamic representation rather than the static representation. Put another way, at the level of abstraction of the OIM we don't care about manipulations at the ADFD level of abstraction.

Finally, I don't buy the associative object analogy. The associative object is required by the relational data model on which the OIM is based. But these new "derived" attributes are not necessary to the relational data model and one could argue that by violating normal form they are contrary to it.

> These three alternatives, while interesting, do not really
> achieve very much more than the proposal I presented; and they
> are more complex. The SM philosophy has never been to go for
> the most powerful concept. It has been to go for the simple,
> yet adequate. Thus state models are minimalist Moore machines.
> More powerful variants exist but they aren't necessary.

I see this as another way of making my second point above. B-)

> There is the issue of identifying the hidden nodes. In this case, it
> doesn't matter; but in more complex scenarios there may be multiple
> hidden nodes at each object. If you name information associated with
> a specific object then this seems, to me, to be an attribute.

I see it more as a DFD label that completes the link to the ADFD. But I am not sure this is necessary. Suppose the hidden node was always identified with the object with the action (as in your example). Now I would think that the multiple cases from complex FSM actions could be represented by different DFD data flows with different change-concepts.

> The question is, do people think a proposal along these lines
> would enhance the method; or is it just an unnecessary "nice idea"?
> I believe that it would enhance the maintainability of the models
> and simplify the ADFDs whilst making only a small change to the
> meta model. It doesn't break anything in the existing method and
> provides a formal basis for the recently introduced (M) attribute.

I like it for several reasons. I agree it would make the ADFDs more maintainable when the same non-local data is accessed in different actions. I also think it would make ADFDs simpler. One of the annoyances of ADFDs is that they are so cluttered that adding a process becomes a half hour exercise in the CASE tool to make room at the right place. Finally, I think it would make ADFDs more readable. When we were doing ADFDs almost half the processes were there simply to access identifiers and this obscured the flow of control. [BTW, I think I would be reluctant to move a lot of transforms, other than simple conversions, to the DFD for this reason. When looking at the ADFD I usually am interested in how the data is being manipulated -- I just don't care about the mechanics of relationship navigation.]

Aside from the OIM attribute issue, I only have two other reservations that should be easily resolved with a couple of trials. The first is that when reading the ADFD the actual access is obscured -- you might have to go to the DFD to figure out where values were going/coming from. In your SMALLish notation you have already addressed this (e.g., ~value > B.attr in your example) -- I am just not sure it would always be that obvious.

The second reservation is that the DFD itself might get humongous since it contains all access paths for all actions. Most of the coupling in S-M domains comes from data accesses so I suspect the DFD may get combinatorially large. There may be ways to handle this (e.g., multiple entry points to relationship chains or shorthand notations for cases where there is only one path). It is also possible that it isn't important. The DFD in this scheme is really just a supporting document to capture navigation trivia and one would probably not be looking at it with the Big Picture in mind. That is, once it is created you might never look at it again until you change a relationship or add an attribute.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

> Mike Finn wrote:
> > Responding to Dave Whipp...

Close enough, Dave. :-)

> My statement should be clarified to read that an OOD design model is
> easier to maintain than an SM-OOA analysis model. Yes, we're trying
> to maintain different things; but that doesn't mean that SM-ers
> should ignore the maintenance issue for analysis models.

Which variant of OOD do you have in mind? Is it a pure OOD method with no OOA front end?

I think an SM-OOA model should be easier to maintain than any OOD model for any given change in the problem space. This is because an SM-OOA model is pitched at a higher level of abstraction. What this really means is that it's a more concise model, so there's less to change than for the OOD model. Plugging an abstract SM-OOA model into an architecture is a comparatively trivial activity.

I agree that SM-ers should not ignore the maintenance issue and think the idea of "One fact in one place" should be the guiding factor.
We just need to agree on the facts. :-)

> The reason why an OOD model (it has nothing to do with UML) is
> easier to maintain than an OOA model is, fundamentally, that
> the people working on OOD have placed significant emphasis
> on developing design techniques to allow them to modify their
> code manually.

I just chose UML as an example. I can't think of a new purely OOD method, only old ones like HOOD or Booch83.

You have to ask *why* people working on OOD have placed significant emphasis on developing design techniques to allow them to modify their code manually. I'm assuming you're referring to Round-trip Engineering as found in tools like Rose and Together/C++, which I find slightly absurd.

> SM people just press the "generate-code" button
> so they don't really care. Unfortunately, this carelessness
> seems to infect the OOA too -- and we do need to maintain that.

Hey! I care. And my OOA remains infection free. :-)

> > > The basic problem
> > > is that all data access navigations are coded in ADFDs. This
> > > means that objects may need to access non-local objects.
> >
> > In the OOA, this is what helps make the problem understandable (by
> > placing the model on a relational foundation). When implemented the
> > data structures may look completely different.
>
> This statement makes me think we may be talking at cross purposes.
> I am purely talking about maintainability of the model, not the
> implementation. Perhaps you were just checking my viewpoint though.

What I'm trying to say is that, for me at least, data access navigations coded in ADFDs are not a problem. Forget my comment about data structures. I would like to think my viewpoint checking technique would be far more subtle than that. :-)

> Guidelines like dependency inversion and the open/closed principle (OCP)
> DO allow a designer to *design* code that is resilient to
> predictable changes. Now, analysis isn't design, but the ideal
> of one change in one place is still desirable.

I'm not familiar with dependency inversion and the open/closed principle.

> Currently in SM, one change in the OIM can lead to a much larger
> number of changes in the ASL. There is a hideous dependency that
> spans three modelling levels (OIM, state model, ADFD). My proposal
> for a DFD at just one level below the OIM (parallel to the OCM);
> and not bound to a specific object; is intended to reduce (though
> not, unfortunately, eliminate) this dependency.

I would describe the dependency between modelling levels as occurring quite naturally. The DFD you propose is a significant departure from the idea that the object is at the center of things.

> > > In OOA96, the concept of the mathematically dependent attribute
> > > was introduced. This allows information to be represented on the
> > > OIM that is not in "normal form".
> >
> > A mistake, IMO. I remember wondering at the time whether its
> > introduction was connected with Bridges or related to the development
> > of BridgePoint. While there is certainly a need for *variables* to
> > convey results of calculations from one part of a model to another,
> > they should not appear on the OIM.
>
> To say it was a mistake is probably a bit too strong. Most definitely
> though, it was not properly thought through.

There seems to be an implicit connection in the OOA96 Report between mathematically dependent attributes and transient data items. Since transient data items must now appear as attributes on the OIM you can just put them in as mathematically dependent attributes.
That solves the perceived problem regarding their type declaration in the OOA. Unfortunately, this *solution* degrades the OIM.

> Does your comment about "variables" indicate that you'd prefer my DFD
> concept if its nodes weren't attributes? Or are you just bringing in
> implementation details again?

Sorry, bad choice of name. I use the name Transient Attribute for data items that appear on event flows but do not appear on the OIM.

> Any technology sufficiently advanced...

Exactly. That's why it's not magic to me. :-)

Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:
> Responding to Dave Whipp...
> > My statement should be clarified to read that an OOD design model is
> > easier to maintain than an SM-OOA analysis model. Yes, we're trying
> > to maintain different things; but that doesn't mean that SM-ers
> > should ignore the maintenance issue for analysis models.
>
> Which variant of OOD do you have in mind? Is it a pure OOD method
> with no OOA front end?

No method in particular. Most OOD books that I've read agree on a set of core concepts - primarily associated with the separation of interface from implementation. However, if forced to quote anyone to back up any statements, then I'd probably use either Martin Fowler or Robert Martin. For example:

> > Guidelines like dependency inversion and the open/closed principle (OCP)
> > DO allow a designer to *design* code that is resilient to
> > predictable changes. Now, analysis isn't design, but the ideal
> > of one change in one place is still desirable.
>
> I'm not familiar with dependency inversion and the open/closed principle.

Please forgive the typo in my earlier message. References can be found at http://www.oma.com/Publications/publications.html. Specifically,

Dependency inversion: http://www.oma.com/PDF/dip.pdf
Open/Closed principle: http://www.oma.com/PDF/ocp.pdf

It should be noted that these are just two examples from many.

> I think an SM-OOA model should be easier to maintain than any OOD
> model for any given change in the problem space. This is because an
> SM-OOA model is pitched at a higher level of abstraction. What this
> really means is that it's a more concise model, so there's less to
> change than for the OOD model. Plugging an abstract SM-OOA model
> into an architecture is a comparatively trivial activity.

Ah, that is the difference between us. You measure maintainability as the ratio of the change in the problem space to the change in the thing you're maintaining (or possibly even to the code). I measure it as the ratio of the direct change in the thing I'm maintaining to the knock-on effects in that thing.

So, if I need to add a new object and then move some attributes from an existing object into it, then that would be the direct change. However, the consequences of that change -- i.e. changing navigations in the ASL -- can be reduced if those navigations are localised in a single dataflow diagram.

> I agree that SM-ers should not ignore the maintenance issue and
> think the idea of "One fact in one place" should be the guiding
> factor. We just need to agree on the facts. :-)

One fact in one place is derived from the more fundamental rule: one change in one place. If one fact in two places can be kept consistent through mechanisms in the formalism, then the decoupling that the redundancy provides may be beneficial, if used correctly.
> You have to ask *why* people working on OOD have placed significant
> emphasis on developing design techniques to allow them to modify
> their code manually. I'm assuming you're referring to Round-trip
> Engineering as found in tools like Rose and Together/C++, which I
> find slightly absurd.

No, I'm not thinking of round-trip engineering. Just good old fashioned manual coding with manual diagrams. And the reason is obvious - it's because that's what they are doing.

> What I'm trying to say is that, for me at least, data access
> navigations coded in ADFDs are not a problem.

I will have to accept that statement. However, other people have agreed that there is something not-quite-right about it. It's probably more noticeable if you use ADFDs rather than ASL. However, even in the various ASLs, the dependency is still there.

> I would describe the dependency between modelling levels as
> occurring quite naturally. The DFD you propose is a significant
> departure from the idea that the object is at the center of things.

It is very natural. Any system where the details are subordinate to the strategy naturally has these dependencies.

I do not agree that the DFD diminishes the idea of the object. If anything, it enhances it. A current object has to deal with issues that have nothing to do with it. If I throw you a ball, then I assume that the laws of motion will work. Neither you, nor I, nor any other object in the domain of ball games needs to mediate the journey of the ball. By freeing the objects of that responsibility, it is much easier to understand the real role of the object. I think of dataflow as something that "just happens"; and transforms naturally respond to that flow.

> There seems to be an implicit connection in the OOA96 Report between
> mathematically dependent attributes and transient data items.

I see no such connection. All OOA'96 says is that, when considering the data as objects, and not as relationships in normal form, it is common to discover properties of objects which exhibit mathematical dependence. In many cases this dependency may break 4th normal form. The rule of proper attributes is thus re-worded to say that if an attribute A is dependent on a set of attributes P, then either A is mathematically dependent on P; or P is an identifier.

The tagging of an attribute with (M) is pretty much arbitrary in the case of mutually dependent characteristics (i.e. any rearrangeable formulae). The examples given are the mass, volume, density example that I have used; and a simple crate with width, height, depth and volume.

Nowhere is anything mentioned about transient attributes. There is no concept of automatic update. The formulae or algorithm are noted in the attribute description and it is up to the modeller to ensure that the value is correct.

> Since
> transient data items must now appear as attributes on the OIM you
> can just put them in as mathematically dependent attributes. That
> solves the perceived problem regarding their type declaration in the
> OOA. Unfortunately, this *solution* degrades the OIM.

Transient attributes are different to derived attributes (even the intermediate nodes on my DFD). A transient attribute only has a value during the life of an action.
Furthermore, it can have multiple values at any one time (the same action can be executed in different instances; but the transient attribute may refer to the same instance in each case -- if this isn't clear then I'll give an example). Thus a transient attribute could break first normal form; whereas a derived attribute will, at worst, break 4th normal form. I tend to regard the lower numbered forms as more important than the later ones.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> First, the OIM is _supposed_ to be normalized. The only exceptions
> are referential attributes and (M) attributes. While the (M) attribute
> is defined in terms of a mathematical formula, the "derived"
> attribute that results from simplifying the ADFD is related to the real
> attribute by simple equality in most cases. (When it isn't, you have
> merely moved ADFD transforms to the DFD.) Basically I am arguing that
> (M) attributes characteristically have a non-trivial formula associated
> with them that justifies their presence in the OIM; otherwise they
> would violate normal form.

This argument is verging on the Clintonesque - it's OK to break the rules provided it's complex enough :-). The examples given in OOA'96 are the volume of a crate and the density of a thing. I would regard these as fairly trivial; though not, admittedly, as trivial as an identity transform (which is actually a multiplexor). And, as you accept, OOA'96 says that it's OK to break 4th normal form if you mark the attributes with (M).

> Second, (M) attributes exist in the OIM only because there is a problem
> space need for them. The "derived" attributes from simplifying the ADFD
> are there purely to accommodate the DFD notation. That is, they are
> relevant to the dynamic representation rather than the static
> representation. Put another way, at the level of abstraction of the OIM
> we don't care about manipulations at the ADFD level of abstraction.

The question, then, is what we care about in the static view. Your answer appears to be that we want a normalised data model (with a few exceptions). My answer would be that we care about the characteristics of the objects. (The difference is that you place more emphasis on what an object *isn't* -- 4th normal form: a characteristic *isn't* an attribute if it is also a characteristic of a non-identifying attribute.)

A few days ago you wrote about your dislike of responsibility driven modelling in the OIM. I, however, tend to adopt that philosophy. This may account for our different emphasis.

> Finally, I don't buy the associative object analogy. The associative
> object is required by the relational data model on which the OIM is
> based. But these new "derived" attributes are not necessary to the
> relational data model and one could argue that by violating normal
> form they are contrary to it.

But why do we use the relational model? I can agree that the rules bring benefits; but, since we aren't developing a relational database, the rules don't seem to have an unshakable foundation. I could probably do a good job of playing devil's advocate against them; however, that should be left to another thread. I suspect that the benefits to OOA derive more from the fact that they are rules, than from what the rules actually say.
In which case, a well-considered modification of the rules is not necessarily a bad thing. In fact, all I'm doing is adding some polish to that slippery slope introduced in OOA'96.

> [...] BTW, I think I would be reluctant to move a lot of transforms,
> other than simple conversions, to the DFD for this reason. When looking
> at the ADFD I usually am interested in how the data is being manipulated
> -- I just don't care about the mechanics of relationship navigation.

I believe that the change-concept should tell you what's happening to the data; or, if you're reading it, then the name of the attribute should tell you. As I said in my reply to Mike Finn, I like to view data transforms as something that "just happens" because data flows without any prompting. The job of the ADFD is to regulate the flow, not to be the mechanics of the flow. You should not view my proposal purely in terms of relationship navigation: that simplification is a consequence of abstracting the dataflow out of the ADFD, not a cause.

> Aside from the OIM attribute issue, I only have two other reservations
> that should be easily resolved with a couple of trials. The first is
> that when reading the ADFD the actual access is obscured -- you might
> have to go to the DFD to figure out where values were going/coming. In
> your SMALLish notation you have already addressed this
> (e.g., ~value > B.attr in your example) -- I am just not sure it would
> always be that obvious.

When I wrote that, you will recall that I said that I hadn't properly considered it. Having given it more thought, I think I would prefer to stick with the current notation. I.e. if you want to write to a remote object, then you navigate to it. The reason is that the notation you quote above has too many special cases where it doesn't apply:

. It doesn't apply to SDFDs (they are not bound to an object).
. It doesn't allow filtering.
. There is no equivalent for read accesses.

So navigations would still be needed, even for data access. So I'll stick with my original proposal: you only add a derived attribute if it is meaningful as a characteristic of the object you attach it to. I may allow you to hide some trivial derivations if they are neither read nor written by any accessor; but an accessor must be given an instance (or set of instances) and one or more attributes in that instance. The only change that I will propose for the ADFD is that a write accessor has an associated "change-concept".

> The second reservation is that the DFD itself might get humongous since
> it contains all access paths for all actions. Most of the coupling in
> S-M domains comes from data accesses so I suspect the DFD may get
> combinatorially large. There may be ways to handle this (e.g., multiple
> entry points to relationship chains or shorthand notations for cases
> where there is only one path).
>
> It is also possible that it isn't important. The DFD in this scheme is
> really just a supporting document to capture navigation trivia and one
> would probably not be looking at it with the Big Picture in mind. That
> is, once it is created you might never look at it again until you change
> a relationship or add an attribute.

I think that I would go with the "It may not be important" option; but not for the reasons that you give. I believe that the DFD is as important as the OCM (possibly slightly more so). However, it is likely to contain several disconnected graphs.
These could be grouped (either in the graphical layout, or using a more formal categorization), so the size shouldn't be a problem. And, of course, managing a large diagram is a good way of understanding/managing the dependencies in the system.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> This argument is verging on the Clintonesque - it's OK to break the
> rules provided it's complex enough :-).

True, but I think the distinction is binary rather than a gradational issue (i.e., any complexity vs. no complexity).

> The question, then, is what we care about in the static view.
> Your answer appears to be that we want a normalised data model (with
> a few exceptions). My answer would be that we care about the
> characteristics of the objects. (The difference is that you place
> more emphasis on what an object *isn't* -- 4th normal form: a
> characteristic *isn't* an attribute if it is also a characteristic of
> a non-identifying attribute.)

My basic issue is that the artificial derived attributes are not a relevant characteristic of the static OIM. They exist only to service a notational change to the dynamic description. At best they are merely a synonym for an attribute appropriately described in another object.

> I suspect that the benefits to OOA derive more from the fact that
> they are rules, than from what the rules actually say. In which
> case, a well-considered modification of the rules is not
> necessarily a bad thing. In fact, all I'm doing is adding some
> polish to that slippery slope introduced in OOA'96.

I agree that for truly derived attributes this is fine. But in those cases _all_ of the attributes are already described in the OIM as part of the problem space description. I just don't want the tail to wag the dog when you are simply cleaning up the ADFD navigations.

> When I wrote that, you will recall that I said that I hadn't properly
> considered it. Having given it more thought, I think I would prefer
> to stick with the current notation. I.e. if you want to write to a
> remote object, then you navigate to it. The reason is that the notation
> you quote above has too many special cases where it doesn't apply:
>
> . It doesn't apply to SDFDs (they are not bound to an object).
> . It doesn't allow filtering.
> . There is no equivalent for read accesses.
>
> So navigations would still be needed, even for data access.

But why does this preclude having that notation available when the special cases don't apply?

> So I'll
> stick with my original proposal: you only add a derived attribute if it
> is meaningful as a characteristic of the object you attach it to.

Now I am getting confused about our discussion above. I thought your original proposal was that you would add artificial "derived" attributes to an object in the OIM that corresponded to those situations where an action of that object accessed a remote attribute and you wanted to eliminate the ADFD's trivial navigation processes to reach that remote attribute.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>> Responding to Dave Whipp
> The only change that I will propose for the ADFD is that a write
> accessor has an associated "change-concept".

Why not paired read and write concepts? Then the same mechanism that allows localization of write access processing can be used for read access processing as well.

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com     www.transcrypt.com

Clive Horn writes to shlaer-mellor-users:
--------------------------------------------------------------------

Domain Analysis and Use Cases

I have recently been involved in performing a domain analysis for a system. I had read in the Bridge Point Reference Manual (Release 3.2, Jan 1996, page 4) that a first cut domain chart could be put together in a concentrated day-long meeting.

I found that this exercise took several intensive days (with about four or five people). Admittedly, some of the requirements for the system were unclear and took considerable legwork to get and document the answers. To identify our domain chart and the services that each domain supplied we used any tool that was available to us, i.e. object blitz, brainstorming, event lists, review, requirements gathering, state machines, use cases, etc. In fact we used any method to help us move forward.

I found the employment of use cases to be one of the most valuable tools. We were following the text written by Alistair Cockburn, "Goals and Use Cases" (JOOP Sept 1997) (http://members.aol.com/acockburn/papers/usecases.htm) and "Using Goal-Based Use Cases" (JOOP Nov/Dec 1997). He describes a ship model in which the User Goals (this translates to system requirements to me) are above a water line and sub-functions (domain services?) exist below the water line. Clearly the user goals can be developed independently of domain charts. As we went below the water line, however, I found that the steps within my use cases could be associated with particular domains.

It strikes me therefore that a method of developing the domain chart might go as follows:

1 - Identify the user goals
2 - Produce a first cut domain chart with the mission of each domain and the services it supplies
3 - Develop the sub-functions (below the water line).
4 - Associate the steps within the use cases with the domains you have identified.
5 - Update the mission of each domain and the services they supply.
6 - Repeat 3 - 6 until stability.

Cheers

Clive (New to the Users Group)

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dana Simonson wrote:
> Responding to Dave Whipp
>
> > The only change that I will propose for the ADFD is that a write
> > accessor has an associated "change-concept".
>
> Why not paired read and write concepts? Then the same mechanism that
> allows localization of write access processing can be used for read
> access processing as well.

If you recall, the purpose of the "change-concept" is to sensitize the DFD to guide the propagation of a change. What would this sensitization mean for a read access?

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
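To make the sensitization idea concrete, here is a minimal sketch in Python of a change-concept guiding propagation through an entangled triad (the Mass = Density * Volume example that comes up later in this thread). The class and concept names are illustrative assumptions, not part of SMALL or the method:

    # Toy model of three mutually dependent attributes. A write names a
    # change-concept; the concept selects which derivation path in the DFD
    # is sensitized (re-evaluated) while all other flows are held constant.
    class Triad:
        def __init__(self, mass, density, volume):
            self.mass, self.density, self.volume = mass, density, volume

        def write_volume(self, value, concept):
            self.volume = value
            if concept == "more_stuff":      # path to mass is sensitized
                self.mass = self.density * self.volume
            elif concept == "compressed":    # path to density is sensitized
                self.density = self.mass / self.volume

    t = Triad(mass=10.0, density=2.0, volume=5.0)
    t.write_volume(10.0, concept="more_stuff")
    assert (t.mass, t.density, t.volume) == (20.0, 2.0, 10.0)

A read accessor, by contrast, just returns the current value; there is no obvious path for it to sensitize, which is the point of the question above.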
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp...
> > This argument is verging on the Clintonesque - it's OK to break the
> > rules provided it's complex enough :-).
>
> True, but I think the distinction is binary rather than a gradational issue
> (i.e., any complexity vs. no complexity).

But even navigation is "some" complexity.

> My basic issue is that the artificial derived attributes are not a relevant
> characteristic of the static OIM. They exist only to service a notational
> change to the dynamic description. At best they are merely a synonym for an
> attribute appropriately described in another object.

So you are saying that they are characteristics of the objects (and, by implication, meaningful for that object); but they are irrelevant to the static information view. May I ask why you believe mathematical dependence is not a static characteristic?

Also, I would re-phrase that last sentence as "At *worst*, they are merely a synonym ..."

> I agree that for truly derived attributes this is fine. But in those
> cases _all_ of the attributes are already described in the OIM as part
> of the problem space description. I just don't want the tail to wag
> the dog when you are simply cleaning up the ADFD navigations.

(I'm actually cleaning up mathematical dependency, with the useful, and intentional, side effect that the ADFDs are simplified.) When I clean up the ADFDs, all I am doing is moving stuff to a place where it is easier to maintain. But it has to go somewhere. As I have said, I can see the case for hiding an intermediate node in a DFD if it is purely a stepping stone.

Let me repeat an example from a few posts back:

       *
    movie_price-------->tape_price---+
        (R1)                         |
                                     |(R2)
                                     |
                                     v
    duration_of_rental-------->multiply=>cost_of_rental
        (self)                      |*
                                    |(R3)
                                    |
                                    v
                               sum_from_zero=>amount_owed

This example has 3 derived attributes. Two of them have a non-trivial derivation function and one, tape_price, is simply a stepping stone (the movie object acts as a specification object for the tape object, on which a movie is recorded).

Presumably, you feel that the tape price should be omitted from the OIM. Whilst I agree that this is possible, what damage does it do to the OIM if it is left in? Consider that, in a later version of the software, the concept of a damaged tape is introduced. A price reduction attribute is added to the tape object. This gives the tape price a non-trivial derivation (movie_price - damage_reduction); so it would now be shown on the OIM. Why does this change cause the tape price to become an interesting characteristic? I also claim that the maintenance task is smaller if the tape price is already part of the OIM than if it is not.

> > When I wrote that, you will recall that I said that I hadn't properly
> > considered it. Having given it more thought, I think I would prefer
> > to stick with the current notation. I.e. if you want to write to a
> > remote object, then you navigate to it. The reason is that the notation
> > you quote above has too many special cases where it doesn't apply:
> >
> > . It doesn't apply to SDFDs (they are not bound to an object).
> > . It doesn't allow filtering.
> > . There is no equivalent for read accesses.
> >
> > So navigations would still be needed, even for data access.
>
> But why does this preclude having that notation available when the special
> cases don't apply?
I don't like special cases, especially not in the supposedly simple notation of Shlaer-Mellor. The "it doesn't allow filtering" is probably the worst special case, because it is easy to imagine a requirements change that would lead to the introduction of such filtering.

> > So I'll
> > stick with my original proposal: you only add a derived attribute if it
> > is meaningful as a characteristic of the object you attach it to.
>
> Now I am getting confused about our discussion above. I thought your
> original proposal was that you would add artificial "derived" attributes
> to an object in the OIM that corresponded to those situations where an
> action of that object accessed a remote attribute and you wanted to
> eliminate the ADFD's trivial navigation processes to reach that remote
> attribute.

"Artificial" was your word. I replied that they are as artificial as the associative object on an M:M relationship; i.e. they aren't. I still believe that the "stepping-stone" attributes are meaningful. I am, almost, willing to concede that some may not be very relevant.

As to what my original proposal was, I think it's starting to get confused. I do see the derived attribute as a way of moving information to a place where it is needed, thus eliminating navigations. However, this should be done only when the information is a meaningful characteristic of an object that is closer to where the information is used. I believe that this is usually the case when a navigation spans more than one relationship (and frequently when only a single relationship is navigated). However, the use of derived attributes in this way requires that they are properly formalised within the method. The proposal is nothing more than a formalisation of mathematically dependent attributes. Everything else we have discussed is a consequence of this.

If anyone has any difficult cases where the proposed mechanism for derivation wouldn't work, please provide the example. I think I can cope with any derivation (and mutual dependence) that does not involve hysteresis or time-dependent behaviour. For example, I had to think a bit about the problem of setting the area of a rectangle while maintaining its aspect ratio (it's solved by introducing an additional attribute for the aspect ratio).

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp

> Why not paired read and write concepts? Then the same mechanism that
> allows localization of write access processing can be used for read
> access processing as well.

>> If you recall, the purpose of the "change-concept" is to sensitize
>> the DFD to guide the propagation of a change. What would this
>> sensitization mean for a read access?

Same thing, sort of: reading an attribute will sensitize one or more DFD nodes such that the attribute being read gets updated. (Allows minimal processing if an attribute is written often and seldom read. Say I have 6 entangled attributes, two of which are updated every millisecond by an ISR. If two of the attributes are read on average once a minute, but need current values when read, it seems much better to calculate the values once a minute rather than 60,000 times per minute.)
<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com     www.transcrypt.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dana Simonson wrote, Responding to Dave Whipp, responding to Dana

>>> Why not paired read and write concepts? Then the same mechanism that
>>> allows localization of write access processing can be used for read
>>> access processing as well.

>> If you recall, the purpose of the "change-concept" is to sensitize
>> the DFD to guide the propagation of a change. What would this
>> sensitization mean for a read access?

> Same thing, sort of: reading an attribute will sensitize one or more DFD
> nodes such that the attribute being read gets updated. (Allows minimal
> processing if an attribute is written often and seldom read. Say I have
> 6 entangled attributes, two of which are updated every millisecond by an
> ISR. If two of the attributes are read on average once a minute, but need
> current values when read, it seems much better to calculate the values
> once a minute rather than 60,000 times per minute.)

You are bringing in implementation considerations that are not appropriate. The DFD does not specify where or when things are executed. It just constrains the order of evaluation. The change-concept is, by definition, associated with a change. There is no way that the action that reads an attribute can know the intention of the previous writer (well, there is a way, but it would be an awful mess).

One fact which I had considered implicit, but which should probably be stated explicitly, is that a derivation function is "pure". Its result does not depend on the number of times it is executed. This means that there is no need to call it as part of the implementation of a write accessor. It can be called, zero or more times, as part of the read accessor's implementation (and you can use a cache, too, if you want). The entanglement causes a few problems for lazy evaluation, but there are no show-stoppers. If you want to provide a specific (but simple) example then I'll quite happily sketch out the pseudo-code of an implementation to meet the general shape of your timing constraints.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp

> One fact which I had considered implicit, but which should probably
> be stated explicitly, is that a derivation function is "pure".
> Its result does not depend on the number of times it is executed.
> This means that there is no need to call it as part of the
> implementation of a write accessor. It can be called, zero or more
> times, as part of the read accessor's implementation (and you can
> use a cache, too, if you want). The entanglement causes a few
> problems for lazy evaluation, but there are no show-stoppers.

So, what you're saying is that I can use a change concept even if I don't care what changed. Using the Mass-Density-Volume triad, let's say I have sensors that monitor all three values asynchronously and with different repetition rates.
If I want the current volume, and don't care how it is determined since the actual change to the physical entity is outside the scope of my subject matter, then it is irrelevant whether the mass decreased or the density increased; all that matters is that the volume decreased. I would want the 'change-concept' to act only on the read side, not the write side. This is acceptable, correct?

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com     www.transcrypt.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dana Simonson wrote:
>
> "Dana Simonson" writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Dave Whipp
>
> > One fact which I had considered implicit, but which should probably
> > be stated explicitly, is that a derivation function is "pure".
> > Its result does not depend on the number of times it is executed.
> > This means that there is no need to call it as part of the
> > implementation of a write accessor. It can be called, zero or more
> > times, as part of the read accessor's implementation (and you can
> > use a cache, too, if you want). The entanglement causes a few
> > problems for lazy evaluation, but there are no show-stoppers.
>
> So, what you're saying is that I can use a change concept even if I don't
> care what changed. Using the Mass-Density-Volume triad, let's say I have
> sensors that monitor all three values asynchronously and with different
> repetition rates. If I want the current volume, and don't care how it is
> determined since the actual change to the physical entity is outside the
> scope of my subject matter, then it is irrelevant whether the mass
> decreased or the density increased; all that matters is that the volume
> decreased. I would want the 'change-concept' to act only on the read
> side, not the write side. This is acceptable, correct?

I must apologise: I'm not entirely sure what you're asking. I'll do my best to reply though.

The change-concept must be specified as part of a write access. It does not matter when it is used; but if it is not used immediately then architectural flags (or equivalent) will need to remember what it was.

From the point of view of a read, if you are reading the volume then you don't care about the history of the volume. You only want the current value. The *implementation* of the read accessor may care about the history, and may thus calculate the current value at that point. The only requirement is that, by the time you use a value from the read accessor, the calculation has been performed according to the history of the attribute (and its dataflow graph). This could be taken to mean that the change concept could act on the read side, not the write side. But it must be specified as part of the write.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi All,

I apologise if I am deviating slightly from the type of discussion that typically appears on this email group, but I hope that some of you may be able to give me some advice.
I have been asked to deliver a presentation to my colleagues at work about the benefits of Object Oriented Analysis and Design and why we should use it in our organisation. Whilst I have some training and experience in SM OOA/RD and UML/OMT, most of my colleagues do not. Some of them have used C++ and other OO languages and it seems like they all have their own favourite tales of how the compiler they used produced enormous code sizes and slow, inefficient code.

I can deliver my own perspective on the common problems or myths associated with OO development, but I would really appreciate other practitioners' comments on what can be some of the common issues and how to address these issues in my presentation.

The presentation is on Wednesday 21st October.

Many thanks in anticipation,

Daniel Dearing

"Stephen R. Tockey" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Daniel Dearing wrote:
> I apologise if I am deviating slightly from the type of discussion that
> typically appears on this email group, but I hope that some of you may
> be able to give me some advice.

This is a reasonable topic for this forum (IMHO).

> I have been asked to deliver a presentation to my colleagues at work
> about the benefits of Object Oriented Analysis and Design and why we
> should use it in our organisation. Whilst I have some training and
> experience in SM OOA/RD and UML/OMT, most of my colleagues do not.

I have given similar presentations in the past. Here are some points that I think you ought to be sure to make:

* The major source of customer-encountered software errors can be traced back to missing, conflicting, misunderstood, or misinterpreted requirements [see Barry Boehm, "Software Design and Structuring" in Practical Strategies for Developing Large Software Systems, Ellis Horowitz ed., Addison-Wesley, 1975]. Boehm stated that about 80% of customer-encountered errors were this way. The whole point of OOA/OOD (or *A/*D) is to reduce missing, conflicting, misunderstood, and misinterpreted requirements. The root problem here is one of communication. Code is good for programmers to communicate to computers what they want done and how to do it, but it's not good for much else.

* Methods can be categorized by the perspective they take on a system:
    information (data): like entity-relationship
    transformation (function): like data-flow (SA/SD)
    timing/sequence (state): like state-transition
    interaction (dialog): like use cases
  OOA/OOD methods can be viewed as a composite of the first 3. When combined with use case-like approaches, OOA/OOD captures most (if not all) system complexities in abstract models.

* Some advantages of OOA/OOD:
    - Provides a high-level perspective that is difficult to extract from the code
    - Much easier to create and modify than the code
    - Much more effective at communicating with other people than the code
    - Allows earlier review and inspection (recall Boehm's order-of-magnitude increase in the cost to fix defects with each successive phase)
    - Extremely valuable when it comes to maintenance

* Some disadvantages of OOA/OOD:
    - A given method may be too rigid
    - A given method may be applied too rigidly
    - It may not be the right method for a given system

> Some
> of them have used C++ and other OO languages and it seems like they all
> have their own favourite tales of how the compiler they used produced
> enormous code sizes and slow, inefficient code.

Be sure not to fall into a trap that OOA/OOD necessarily implies OOP.
We have been very successful on a number of past projects converting OOD to SD at the last minute and coding in Fortran, C, and even Cobol. It's a fairly simple conversion; I can explain how to do it if you want.

Be sure also to consider that any two different compilers for the same language can vary significantly in the size and speed characteristics of the output. Was the size and slowness inherent in the "OO-ness"? Was it a problem with the particular compiler? Was it just plain sloppy code?

> I can deliver my own perspective on the common problems or myths
> associated with OO development, but I would really appreciate other
> practitioners' comments on what can be some of the common issues and how
> to address these issues in my presentation.

A bit of "audience analysis" on your part might help your cause immensely. Are they anti-OO or are they anti-any-analysis-or-design (or, are they just clueless about methods in general)? How much real experience have they had with OO stuff? Are they aware of how CASE tools fit into the picture? How effective is project management at your company? No amount of any analysis or design method will fix the ills of poor project management.

You might also want to spend some time looking at "Technology Transfer" kinds of resources. For example, see:

    Raghavan and Chand, "Diffusing Software Engineering Methods", IEEE Software, July 1989
    Helen Rubenstein, "Getting from here to there: managing change", Object Magazine, September-October 1992
    Everett Rogers, Diffusion of Innovation, Fourth Ed., The Free Press, 1995 (this is THE classic work on the topic)

> The presentation is on Wednesday 21st October.

Best of luck,
-- steve

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Horn...

> I have recently been involved in performing a domain analysis for a
> system. I had read in the Bridge Point Reference Manual (Release 3.2,
> Jan 1996, page 4) that a first cut domain chart could be put together
> in a concentrated day-long meeting.
>
> I found that this exercise took several intensive days (with about four or
> five people). Admittedly, some of the requirements for the system were
> unclear and took considerable legwork to get and document the
> answers. To identify our domain chart and the services that each domain
> supplied we used any tool that was available to us, i.e. object blitz,
> brainstorming, event lists, review, requirements gathering, state machines,
> use cases, etc. In fact we used any method to help us move forward.

I agree that nontrivial applications (e.g., beyond the Garage Door Opener) should take a while. (In fairness to the manual, though, it did say "first cut".) However, I am not enamored with use cases for defining the DC. To define a DC you need the following:

(1) Identify "subject matters". As a first cut you can use some sort of functional decomposition to identify the major players (e.g., use cases) but there is a potential pitfall in that the resulting domains can have improper (i.e., mixed) levels of abstraction. Sometimes this is not evident until a lot of effort has been devoted to the individual domain analysis. Therefore we usually start by trying to identify large scale Things in the problem space and use them as the first cut at domains.

(2) Identify bridges. Conventionally this involves identifying requirements that are passed down the DC. Very large scale use cases can be useful for this.
Typically the domain mission statements get updated (refined) in the process because functionality is being allocated to the domains. This is also where missing domains are sometimes identified, because the original mission statements have a level of abstraction that is inconsistent with the requirements flows (e.g., "uphill" arrows).

This is an iterative process, but the primary goal should be to ensure that the domain mission statements capture (a) major differences in What the domains are and (b) appropriate levels of abstraction for the positions in the DC. Though use cases are a handy tool, I would place my main emphasis on these goals. Typically the use cases we use for the DC are quite informal (i.e., we make them up at the whiteboard to double check the mission statements and bridge flows).

From a philosophical viewpoint, I tend to regard the DC as a requirements document because of the nature of bridge flows -- that is, one could do a DC directly from a Statement of Requirements. OTOH, I tend to associate use cases with Functional Specifications (in our shop a detailed user view of the system's black box functionality) since the use case describes how a user interacts with the system. Therefore I don't care for the responsibility-based approaches for identifying domains or objects.

So when we do domain analysis we initially try to identify the objects based upon problem space entities and concepts directly (i.e., again more based upon requirements than function). Once we have the basic problem space objects, we then use informal use cases to apportion functionality among those objects. These use cases are also a check step to ensure that the objects identified can handle the way that the user wants to use the system. The advantage of deferring the use of use cases is that one avoids the problem common to use-case-driven developments: objects with no data. Since S-M is a data-driven methodology I think one should avoid a plethora of Controller, Manager, etc. type objects that are thinly veiled function libraries.

Our main use of formal (i.e., detailed, persistent) use cases is for defining domain simulation and system integration test suites. We also use formal use cases to define the minimum set we need to actually implement to provide incremental development (i.e., developing only a subset of features for a particular release). They are extremely useful for doing this.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATD                       could not be cured by a capful of Drano
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > Why not paired read and write concepts? Then the same mechanism that
> > allows localization of write access processing can be used for read
> > access processing as well.
>
> If you recall, the purpose of the "change-concept" is to sensitize
> the DFD to guide the propagation of a change. What would this
> sensitization mean for a read access?

Though we almost always want to ensure that the same instances are reached by traveling different paths in relationship loops, this is not necessarily so.

    Parent <-------> Car
    Parent<------->>Spoiled Teenager
    Spoiled Teenager<-----> Car

If all of these relationships are "owns" on the right side, then one would not normally expect to reach the same Cars from Parent directly as through the Spoiled Teenager.
Your change-concept could be used to distinguish which path one wants in a particular action.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATD                       could not be cured by a capful of Drano
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Stephen,

Many thanks. You raised some interesting points which I hadn't already covered and some slightly different perspectives on some of the things I have covered. I will use some of it in my presentation undiluted, if that's OK. Just the sort of thing I was looking for.

Great, cheers

Dan :-)

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> So you are saying that they are characteristics of the objects (and,
> by implication, meaningful for that object); but they are irrelevant
> to the static information view. May I ask why you believe mathematical
> dependence is not a static characteristic?

Because I don't think the mathematical dependence exists in the problem space unless the attributes are truly derived. If all you are doing is replacing navigation in the ADFD, then the static model is adequately described by a single attribute in the target object. Creating a second attribute in the source object that is related to the first by identity has no relevance in the problem space.

> (I'm actually cleaning up mathematical dependency, with the useful,
> and intentional, side effect that the ADFDs are simplified.) When I
> clean up the ADFDs, all I am doing is moving stuff to a place where
> it is easier to maintain. But it has to go somewhere. As I have said,
> I can see the case for hiding an intermediate node in a DFD if it is
> purely a stepping stone.
>
> Let me repeat an example from a few posts back:
>
>        *
>     movie_price-------->tape_price---+
>         (R1)                         |
>                                      |(R2)
>                                      |
>                                      v
>     duration_of_rental-------->multiply=>cost_of_rental
>         (self)                      |*
>                                     |(R3)
>                                     |
>                                     v
>                                sum_from_zero=>amount_owed
>
> This example has 3 derived attributes. Two of them have a non-trivial
> derivation function and one, tape_price, is simply a stepping stone
> (the movie object acts as a specification object for the tape object,
> on which a movie is recorded).
>
> Presumably, you feel that the tape price should be omitted from the
> OIM. Whilst I agree that this is possible, what damage does it do to
> the OIM if it is left in?

This is not an example of what I object to. This is a case of a true derived attribute -- amount_owed is not related to any of the other attributes by an identity relationship. Depending upon the application, I might or might not have a problem with all of these attributes being in the OIM.

The situation where I do not see the "derived" attribute as justified occurs in the most common situation when you have no transforms and you are simply removing a set of relationship navigations from the ADFD.

    event_value > Aaction -------> B --------> C.attr
                           R1          R2

In this case A's action takes an event value and assigns it to C.attr. I don't see creating a derived attribute in A simply to move the R1/R2 navigation to the DFD. Such an attribute would be completely artificial because it adds no information to the OIM that is not provided by C.attr.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATD                       could not be cured by a capful of Drano
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dearing...

> I have been asked to deliver a presentation to my colleagues at work
> about the benefits of Object Oriented Analysis and Design and why we
> should use it in our organisation. Whilst I have some training and
> experience in SM OOA/RD and UML/OMT, most of my colleagues do not. Some
> of them have used C++ and other OO languages and it seems like they all
> have their own favourite tales of how the compiler they used produced
> enormous code sizes and slow, inefficient code.

FWIW, I have a hardcopy of a report we made evaluating S-M on our first pilot project several years ago. It was written with naive potential users in mind. If you send me a mail address I can send you a copy, but it might not get there in time.

Unfortunately that pilot was manually coded in straight C, so code bloat and performance were not problems. However, the fact that one can code directly in C for S-M represents an argument for that methodology. Many of the size/performance problems are really a problem for C++ rather than the methodologies. C++ provides nearly infinite ways to shoot yourself in the foot and it is not very scalable for large projects. John Lakos' book, "Large Scale C++ Software Design", is an excellent resource for these problems. Though C++ is the worst offender, the other OO languages have similar warts. [Note, though, that the current state of the art in commercial S-M code generators results in poor performance and large size because they don't optimize well.]

I think the basic counter to the arguments you cite is: Don't Do That. Separate the language issues from the methodology issues. In the implementation don't do the fancy stuff (operator overloading, complex inheritance, polymorphic interfaces, object assignments, etc.) and take care with heap allocations/deallocations. If you do this, then your performance/size problems should not be serious. Since UML is driven by OOPL syntax, this means that you would have to restrict yourself to a subset of that notation. (Coincidentally that subset looks a lot like S-M.)

FWIW, our data indicates no difference in initial development time, a roughly 50% reduction in defects, and approaching an order of magnitude reduction in maintenance time using S-M over procedural techniques. Personally I am skeptical that these results could be achieved with other OO methodologies. The combination of domain firewalls and message-based communications naturally provides an environment that simplifies maintenance and debugging. The levels of abstraction for OIM, STD, and ADFD are excellent for high level flow of control -- they come naturally in S-M but it takes a lot of experience to emulate them in other notations. The exclusive use of asynchronous FSMs also allows much easier and more extensive unit testing.

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATD                       could not be cured by a capful of Drano
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> > If you recall, the purpose of the "change-concept" is to sensitize
> > the DFD to guide the propagation of a change. What would this
> > sensitization mean for a read access?
>
> Though we almost always want to ensure that the same instances are reached
> by traveling different paths in relationship loops, this is not necessarily
> so.
>
>     Parent <-------> Car
>     Parent<------->>Spoiled Teenager
>     Spoiled Teenager<-----> Car
>
> If all of these relationships are "owns" on the right side, then one would
> not normally expect to reach the same Cars from Parent directly as through
> the Spoiled Teenager. Your change-concept could be used to distinguish
> which path one wants in a particular action.

At least one of us is getting very confused (I know I am). An attribute cannot have different values depending on how you view it. That would violate 1st (2nd?) normal form. If you want two different results then use two attributes.

DFD sensitisation is concerned purely with controlling the propagation of changes. It is without meaning when applied to a read access. The very best you could hope for is a highly architecture-dependent concept which breaks lazy evaluation: this is an utterly useless ability for an OOA model.

DFD sensitisation, even on a write access, cannot control how a relationship is navigated. All it can do is hold the output of a flow as constant (or not) for the duration of an update. If you want to multiplex dataflows, then define a multiplexor as a derivation function (it doesn't even require guards).

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> >     Parent <-------> Car
> >     Parent<------->>Spoiled Teenager
> >     Spoiled Teenager<-----> Car
> >
> > If all of these relationships are "owns" on the right side, then one would
> > not normally expect to reach the same Cars from Parent directly as through
> > the Spoiled Teenager. Your change-concept could be used to distinguish
> > which path one wants in a particular action.
>
> At least one of us is getting very confused (I know I am). An
> attribute cannot have different values depending on how you view it.
> That would violate 1st (2nd?) normal form. If you want two different
> results then use two attributes.

The attribute value isn't changing with the view, but the instance containing it is. Suppose a given Parent, Igor, has one kid, Millard; Igor owns an orange Edsel; and Millard owns a puce Porsche. Suppose I am in Igor's instance's action and I want the value of the Car.color attribute of a Car instance. Which instance? That depends upon whether I want Igor's car's color (orange) or Millard's car's color (puce). In that action I would navigate by different relationships depending upon which Car instance I wanted. I am just arguing that you could use a change-concept to designate which path you wanted to traverse in the DFD.

> DFD sensitisation is concerned purely with controlling the
> propagation of changes. It is without meaning when applied to
> a read access. The very best you could hope for is a highly
> architecture-dependent concept which breaks lazy evaluation:
> this is an utterly useless ability for an OOA model.

I am just broadening what you refer to as sensitisation (I think; see below). I thought that one of your original purposes was to simplify ADFDs by getting rid of the boilerplate identifier navigations.
In my experience non-local read accessors outnumber non-local write accessors by a large margin; instances tend to update their own data by gathering non-local data and transforming it. For example, in your video example I would normally expect amount_owed to be calculated in an action of the object that contains amount_owed, after accessing movie_price and duration_of_rental. [When I first saw the example I thought, why is this being calculated in the object with duration_of_rental? But it didn't seem important at the time, so I didn't ask.] If you do not extend that ADFD cleanup to read accessor navigations, then I don't see much cleanup occurring. That would seriously reduce the value in my view (perhaps because we have never had a really nasty problem with entangled attributes to date).

> DFD sensitisation, even on a write access, cannot control how a
> relationship is navigated. All it can do is hold the output of
> a flow as constant (or not) for the duration of an update. If
> you want to multiplex dataflows, then define a multiplexor as a
> derivation function (it doesn't even require guards).

Now I'm confused again. I thought the change-concept identified different flows in the DFD to be traversed (e.g., the flow with the appropriate guard, like [density]>mass).

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATD                       could not be cured by a capful of Drano
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi All,

I just wanted to say thanks for all the advice and tips from everyone. I had a great response (especially from the team at Kennedy Carter). In return for your generosity, here are a few Object Oriented jokes. I hope you haven't heard them :-)

Q: What's the difference between an object methodologist and a terrorist?
A: You can negotiate with a terrorist!

Q: How many OO developers does it take to change a light bulb?
A: None, you just send a change-lightbulb message to the socket object.

"This is an object-oriented system. If we change anything, the users object."

Cheers,

Dan :-)

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Because I don't think the mathematical dependence exists in the problem space
> unless the attributes are truly derived. If all you are doing is replacing
> navigation in the ADFD, then the static model is adequately described by a single
> attribute in the target object. Creating a second attribute in the source object
> that is related to the first by identity has no relevance in the problem space.

I argue that a derived attribute exists in the problem space if it is meaningful. The details of its relationship with other attributes do not determine its meaningfulness.

>> [example snipped]

> This is not an example of what I object to. This is a case of a true derived
> attribute -- amount_owed is not related to any of the other attributes by an
> identity relationship. Depending upon the application, I might or might not have
> a problem with all of these attributes being in the OIM.

> The situation where I do not see the "derived" attribute as justified occurs in
> the most common situation when you have no transforms and you are simply removing
> a set of relationship navigations from the ADFD.
>
>     event_value > Aaction -------> B --------> C.attr
>                            R1          R2
>
> In this case A's action takes an event value and assigns it to C.attr. I don't
> see creating a derived attribute in A simply to move the R1/R2 navigation to the
> DFD. Such an attribute would be completely artificial because it adds no
> information to the OIM that is not provided by C.attr.

I think you've got this mixed up. C.attr is the derived attribute; A would have the concrete attribute.

It is impossible to argue about meaningfulness with an abstract example. To determine whether A.attr or C.attr is meaningful requires some context. Even in your other example:

> Parent <-------> Car
> Parent<------->>Spoiled Teenager
> Spoiled Teenager<-----> Car
>
> If all of these relationships are "owns" on the right side, then one would
> not normally expect to reach the same Cars from Parent directly as through
> the Spoiled Teenager. Your change-concept could be used to distinguish
> which path one wants in a particular action.

[... follow up post ...]

> Suppose I am in Igor's instance's action and I want the value of the Car.color
> attribute of a Car instance. Which instance? That depends upon whether I want
> Igor's car's color (orange) or Millard's car's color (puce). In that action
> I would navigate by different relationships depending upon which Car instance I
> wanted. I am just arguing that you could use a change-concept to designate which
> path you wanted to traverse in the DFD.

Even in this example, there is insufficient context to work out what the correct model would be. Why do you want to know the car colour? Why do you sometimes want one, and at other times the other? I cannot see why one read accessor (with an added "read-concept") would want to do both. Either use two different read accessors (to read different attributes) or navigate to the attribute that you want. The important thing, IMO, is to keep everything meaningful: an analysis model that doesn't focus on "meaning" is not much of an analysis.

(An additional point: one parent has many kids according to your model, so you would require a filter to select just one - filtering, if you recall, was one of the reasons I gave for deciding against this usage of the DFD.)

> If you do not extend that ADFD cleanup to read accessor navigations,
> then I don't see much cleanup occurring. That would seriously reduce
> the value in my view (perhaps because we have never had a really
> nasty problem with entangled attributes to date).

Take another look at my video-shop example. You agreed that it could be OK, depending on the application (though, on closer inspection of the details, it actually goes against UK trading laws). Consider the savings in the ADFDs from not having to put the three derived values, and two transforms, in the ADFDs. If you don't use (M) attributes at all, then all users of "amount owed" (probably more than one) would have to do the entire calculation! If you use (M) attributes, as per OOA'96, then the savings are less.

> > DFD sensitisation, even on a write access, cannot control how a
> > relationship is navigated. All it can do is hold the output of
> > a flow as constant (or not) for the duration of an update. If
> > you want to multiplex dataflows, then define a multiplexor as a
> > derivation function (it doesn't even require guards).

> Now I'm confused again. I thought the change-concept identified
> different flows in the DFD to be traversed (e.g., the flow with
> the appropriate guard, like [density]>mass).

It's a matter of viewpoint.
I tend to think asynchronously, and in terms of fine-grain parallelism (probably because I do a lot of hardware). I therefore think of the guards in terms of D-latches, and derivations as continuous processes. Thus, for the M=DV case, if I alter the volume by adding more stuff:

    start: M=10, D=2, V=5 (continuously calculate all three, and they stay the same)
    now:   V=10; sensitise path to M; all other flows are constant. So:
           V=10 (forced, disable recalculation)
           D=2  (both input flows held constant: V=5, M=10)
           M=20 (D=2 const; V=10 changed)
    after the recalculation: M=20, D=2, V=10

These continuous-time semantics are equivalent to the change-ripple I described in an earlier post. However, the change-ripple description is not my native thought pattern.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> >     event_value > Aaction -------> B --------> C.attr
> >                            R1          R2
> >
> > In this case A's action takes an event value and assigns it to C.attr. I don't
> > see creating a derived attribute in A simply to move the R1/R2 navigation to the
> > DFD. Such an attribute would be completely artificial because it adds no
> > information to the OIM that is not provided by C.attr.
>
> I think you've got this mixed up. C.attr is the derived attribute;
> A would have the concrete attribute.

No, I don't think so. If I were writing the model today without your DFD solution, I would write the action as in the example. The only attribute in the OIM would be C.attr, so it _has_ to be the concrete attribute as far as the problem space is concerned.

I understand your argument that to make the DFD work it would be convenient to anchor the DFD to an attribute in A that is written, making C.attr derived. However, in my view this is the strongest argument yet for not placing A.attr in the OIM -- it artificially makes A.attr the concrete attribute when the problem space would have C.attr be the concrete attribute.

> > Parent <-------> Car
> > Parent<------->>Spoiled Teenager
> > Spoiled Teenager<-----> Car
> >
> > If all of these relationships are "owns" on the right side, then one would
> > not normally expect to reach the same Cars from Parent directly as through
> > the Spoiled Teenager. Your change-concept could be used to distinguish
> > which path one wants in a particular action.
>
> [... follow up post ...]
>
> > Suppose I am in Igor's instance's action and I want the value of the Car.color
> > attribute of a Car instance. Which instance? That depends upon whether I want
> > Igor's car's color (orange) or Millard's car's color (puce). In that action
> > I would navigate by different relationships depending upon which Car instance I
> > wanted. I am just arguing that you could use a change-concept to designate which
> > path you wanted to traverse in the DFD.
>
> Even in this example, there is insufficient context to work out what
> the correct model would be. Why do you want to know the car colour? Why
> do you sometimes want one, and at other times the other? I cannot see
> why one read accessor (with an added "read-concept") would want to do
> both. Either use two different read accessors (to read different
> attributes) or navigate to the attribute that you want.
> The important thing, IMO, is to keep everything meaningful: an
> analysis model that doesn't focus on "meaning" is not much of an
> analysis.

I think the reason why the action in Parent wants to look at a car color is not at all relevant; what matters is that Parent has _some_ reason to navigate to two different Car instances. But for the sake of argument I will postulate that in a Parent action one needs to know if any of the children's cars have the same color as the parent's. When Parent = Igor, the action will navigate to Igor's Car instance (the orange Edsel) and to Millard's Car instance (the puce Porsche) and then compare the two values. In both cases the identical accessor is invoked (say, Car.get_color), but because the instances are different, a different attribute value will be returned. The read accessor does not "do both" -- it just returns the value of the Color attribute for the instance in hand. However, the action navigates to different instances using different paths. I argue that those different navigations could be placed in the DFD and the ADFD action could trigger the correct one in each case by specifying a change-concept.

> (An additional point: one parent has many kids according to your
> model, so you would require a filter to select just one - filtering,
> if you recall, was one of the reasons I gave for deciding against
> this usage of the DFD.)

I don't think a filter is needed for the example; a simple iteration will do. But this brings up another good point. How many attributes does one put in A? Suppose Igor has 22 children -- do you place 22 attributes in A? This is the same argument I used against transient attributes -- when navigation returns a set of values it would require a matching set of attributes. Or would you limit your DFD to navigations that only return a single value?

> > If you do not extend that ADFD cleanup to read accessor navigations,
> > then I don't see much cleanup occurring. That would seriously reduce
> > the value in my view (perhaps because we have never had a really
> > nasty problem with entangled attributes to date).
>
> Take another look at my video-shop example. You agreed that it could be
> OK, depending on the application (though, on closer inspection of the
> details, it actually goes against UK trading laws). Consider the
> savings in the ADFDs from not having to put the three derived values,
> and two transforms, in the ADFDs. If you don't use (M) attributes at
> all, then all users of "amount owed" (probably more than one) would
> have to do the entire calculation! If you use (M) attributes, as per
> OOA'96, then the savings are less.

But I also said that I didn't like the idea of plugging too many transforms into the DFD because it split the processing into two different diagrams. That would make it more difficult to "view" the processing algorithm. I probably would not put those transforms in the DFD in your video example because they are very likely crucial in the problem space (with the possible exception of the relatively trivial tape_price calculation). At the ADFD level I want to see all the data manipulations, not just a few of them.

In an action I want to know What I am doing to data, not How I get it. That is, when I write the action I assume that I resolved my referentials properly in the OIM and, therefore, I can get from Here to There consistently. (An occasional exception may require me to think about it, as in my Car example, but even there I just want to point at the path rather than traverse it.)
So I want to specify the data with minimal hassle and expend most of my limited attention span on what the processing is doing to that data. Where I see the value of your approach is in providing a shorthand just for the navigation chain. I would bet our non-local data accesses traverse at least two relationships on average. That's a lot of boilerplate in the ADFDs that is distracting from what I care about -- the data manipulations. [I am biased in that we rarely ever have true derived attributes, so the elegance of your solution for the (M)s is of secondary concern.]

--
H. S. Lahman                       There is nothing wrong with me that
Teradyne/ATD                       could not be cured by a capful of Drano
179 Lincoln St.
L51 Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> No, I don't think so. If I were writing the model today without your DFD
> solution, I would write the action as in the example. The only attribute
> in the OIM would be C.attr, so it _has_ to be the concrete attribute as
> far as the problem space is concerned.
>
> I understand your argument that to make the DFD work it would be
> convenient to anchor the DFD to an attribute in A that is written,
> making C.attr derived. However, in my view this is the strongest
> argument yet for not placing A.attr in the OIM -- it artificially
> makes A.attr the concrete attribute when the problem space would have
> C.attr be the concrete attribute.

You seem to be implying that a derived attribute is less related to the problem space than a concrete one. I disagree. An attribute is a meaningful, and relevant, characteristic of an object. Whether it gets updated in an ADFD, an SDFD, or via derivation does not enter into the definition. If A.attr is not a meaningful characteristic of A, then it cannot be an attribute; and you must, instead, navigate to either B or C where there may be a meaningful attribute. (Yes, I am deprecating 4th normal form from the identification of an attribute; but I'm not really going any further than OOA'96.)

> I think the reason why the action in Parent wants to look at a car
> color is not at all relevant; what matters is that Parent has _some_
> reason to navigate to two different Car instances. But for the sake
> of argument I will postulate that in a Parent action one needs to know
> if any of the children's cars have the same color as the parent's.

Good. In that case, I will postulate a two-valued attribute in the parent called "child_has_same_color_car". This is a derived attribute:

      *              *
    car.colour------>teenager------->is_member=>parent.child_has_same_color_car
      |     [r1]          [r2]                      ^
      |[r3]                                         |
     *v                                             |
    parent------------------------------------------+
                        [self]

The parent now has no reason to do any navigations. Instead, it can just read its attribute. This keeps the ADFD nice and simple. But it only tells you if *any* child has the same coloured car - not if a specific child does. If you want to know which child has the same colour car, then you need a different derived attribute. You'd need to put the predicate in each child; then the parent just checks the child to see if the predicate is true:

      *
    car.colour----->teenager----->is_member=>teenager.parent_has_same_color_car
      |     [r1]        [self]                      ^
      |[r3]                                         |*
     *v                                             |
    parent------------------------------------------+
                        [r2]

The ADFD won't be quite as simple (if it's in the parent); but then, it's working with more-specific information.
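For concreteness, the two derivations above can be rendered as a toy Python sketch (the classes, the use of properties as derivation functions, and the attribute wiring are all illustrative assumptions, not anything the method prescribes):

    # Toy objects for the Igor/Millard example. Each derived attribute is a
    # pure derivation function over the relationships r1/r2/r3.
    class Car:
        def __init__(self, colour):
            self.colour = colour

    class Parent:
        def __init__(self, car):
            self.car = car            # r3: parent's own car
            self.teenagers = []       # r2: populated as children are created

        @property
        def child_has_same_color_car(self):
            # First derivation: is_member over the set of children's colours.
            return any(t.car.colour == self.car.colour for t in self.teenagers)

    class Teenager:
        def __init__(self, car, parent):
            self.car = car            # r1: teenager's own car
            self.parent = parent      # r2
            parent.teenagers.append(self)

        @property
        def parent_has_same_color_car(self):
            # Second derivation: the predicate lives on the child instead.
            return self.car.colour == self.parent.car.colour

    igor = Parent(Car("orange"))
    millard = Teenager(Car("puce"), igor)
    assert igor.child_has_same_color_car is False
    assert millard.parent_has_same_color_car is False

The first predicate answers "any child?" in one read; the second answers the per-child question, which is why the two models are not interchangeable.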
As these two possibilities show, knowing the reason for a navigation is of crucial importance when analysing the dataflow in the application.

> I don't think a filter is needed for the example; a simple iteration
> will do. But this brings up another good point. How many attributes
> does one put in A? Suppose Igor has 22 children -- do you place 22
> attributes in A? This is the same argument I used against transient
> attributes -- when navigation returns a set of values it would require
> a matching set of attributes. Or would you limit your DFD to
> navigations that only return a single value?

I think I've used the same argument - it breaks first normal form. However, as I showed in the example above, by considering the reason for the navigations it is possible to move the information closer to where it is needed without naively adding attributes everywhere.

> But I also said that I didn't like the idea of plugging too many
> transforms into the DFD because it split the processing into two
> different diagrams. That would make it more difficult to "view"
> the processing algorithm. I probably would not put those transforms
> in the DFD in your video example because they are very likely crucial
> in the problem space (with the possible exception of the relatively
> trivial tape_price calculation). At the ADFD level I want to see all
> the data manipulations, not just a few of them.

This is our primary point of difference. I want to remove most of the data manipulation from the state actions, and use them primarily for control-flow issues. To see how data is processed, I would use the DFD. So I am not splitting the data processing; I'm merging it into one (possibly partitioned) diagram. Mike Finn may have had a point when he said that I was demoting the concept of an object; but I would prefer to say that I am focusing on object interaction, rather than demoting the object. The processing itself is still associated with the object; but the description of relationships is enhanced to show the data that flows across them. Most algorithms span multiple objects; so why should we artificially force bits of the algorithm into an object?

> Where I see the value of your approach is in providing a shorthand
> just for the navigation chain. I would bet our non-local data accesses
> traverse at least two relationships on average. That's a lot of
> boilerplate in the ADFDs that is distracting from what I care about
> -- the data manipulations.

I think you are seeing a part of the same problem as I am; but I also see repeated data manipulations being placed in multiple state actions. The additional states are needed for flow-of-control reasons, and so the data manipulation must be repeated. My solution is to move most of the data manipulation out of the ADFDs and into a new diagram which is unpolluted by control flow.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de      Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

I think we are going in circles because there are three different issues and we are not coordinated about which one we are talking about in a given context. As I see it, the issues are:

(1) Movement of transforms to DFDs.
(2) Use of DFDs to replace ADFD navigation boilerplate.
(3) Introduction of "derived" attributes into the OIM.
These are all related, but can be confusing when we are talking past one another about different issues.

When you initially proposed the DFD solution you identified two goals: (a) cleaning up the (M) situation and (b) removing navigational information from the ADFDs. From my perspective (a) wasn't a big deal because we have few (M) attributes, but (b) was a very big deal because most of our ADFD processes are identifier accesses for navigating relationships. What I missed until recently was your emphasis on (1).

You see approaching the ADFDs with a different mindset, where writes whose values are computed from other attributes can be placed in the DFD so that they are triggered by any write to the input attributes. This requires that the associated tests and transforms that compute the value to be written are also moved into the DFD. This means that what qualifies as a derived attribute is considerably expanded because much more complex processing can be applied than the simple, deterministic, static mathematical formula usually associated with (M) attributes. [It also implies that the way one thinks about and creates the models might be altered. More below.]

Meanwhile I was getting all excited about (2). Unfortunately we did not mean the same thing when speaking of derived attributes (I only realized this when you said that only write accessors were involved). The only navigations you were removing from the ADFD were those associated with (1), but I wanted to remove _all_ the navigations. (This is something I still want to do, and extensions to the DFD solution to allow this are what my last couple of messages have been focused upon.)

Finally, with your emphasis on (1) there really weren't any new derived attributes in the OIM -- the existing ones were simply viewed as derived when one of the existing input attributes was written. As a corollary you wanted transient values (e.g. tape_price) for the intermediate calculations to be attributes somewhere. However, from my perspective of doing (2) I could see no way to do this except by adding new, artificial derived attributes because I wasn't starting from a write to an existing attribute -- I was thinking in terms of a navigation to a read accessor. So we started going around about (3) -- we had completely different views of what a derived attribute was.

So at this point I think (3) is a red herring. If the DFDs are restricted to serving (1), then it is probably not an issue. [I have a problem in principle with your model below, but that is a different issue.] If the DFDs are used to support wider use for (2), then I think we would both seek a different mechanism for anchoring the start of DFD processing than placing more attributes in the OIM.

I am prepared to argue that (1) is less desirable than (2). I deal with the undesirability of (1) below, but here let me point out that (2) can be handled by the DFD notation more generally than just the pseudo-(M) situations. In the situations where I want to eliminate accessors from the ADFD I think your objections based upon filters, etc. are not relevant. The only processes I am removing are simple identifier accesses because the data I want is in an object that is 2-3 relationships away in the OIM. [For the moment assume we are talking about navigations that you would not move to the DFD because they are not (M)-like.] I see the DFD to be simply a place to put the details when I use a shorthand in the ADFD to identify the data and the path.
In this situation I see no problem with anchoring the DFD string with a begin/end process:

  (get_color)<--(get_Car_ID)<--(get_TeenagerID)
       |      R1            R2
       |
       +-------------------------------->(end)---->color

where the "end" process is a placeholder identified in the ADFD syntax (possibly by a variation on the change-concept) and the output "color" maps into an ADFD variable (attribute, transient, or event data).

> > I understand your argument that to make the DFD work it would be
> > convenient to anchor the DFD to an attribute in A that is written,
> > making C.attr derived. However, in my view this is the strongest
> > argument yet for not placing A.attr in the OIM -- it artificially
> > makes A.attr the concrete attribute when the problem space would have
> > C.attr be the concrete attribute.
>
> You seem to be implying that a derived attribute is less related to
> the problem space than a concrete one. I disagree. An attribute is
> a meaningful, and relevant, characteristic of an object. Whether it
> gets updated in an ADFD, an SDFD, or via derivation does not enter
> into the definition. If A.attr is not a meaningful characteristic
> of A, then it cannot be an attribute; and you must, instead, navigate
> to either B or C where there may be a meaningful attribute.

I think this reflects your assumption of (1). That is, the DFD flows would not exist unless _both_ A.attr and C.attr already existed in the OIM -- you would not move the navigation flow to the DFD unless this was so.

In my scenario of simply removing navigation accessors there would be no A.attr because the value to write to C.attr is on an event that happens to be addressed to A. That is, even though there was no justification for an A.attr when I created the OIM I still want to remove the ADFD navigation clutter in A's action. To do so under your restrictions would require creation of A.attr and then writing the event data to it, which I would regard as artificial and unnecessary.

> Good. In that case, I will postulate a two-valued attribute in the
> parent called "child_has_same_color_car". This is a derived attribute:
>
>    *                *
>   car.colour----->teenager----->is_member => parent.child_has_same_color_car
>    |    [r1]         [r2]           ^
>    |[r3]                            |
>   *v                                |
>   parent----------------------------+
>               [self]
>
> The parent now has no reason to do any navigations. Instead, it can
> just read its attribute. This keeps the ADFD nice and simple. But
> it only tells you if *any* child has the same coloured car - not if
> a specific child does.

You are changing both the OIM and the ADFD models to accommodate the use of the DFD notation. That is, if there were no DFD alternative it probably would not have been modeled this way. In particular, when the OIM was developed there would not have been an apparent need for child_has_same_color_car. The need only arose from a desire to simplify the ADFD, given the DFD feature. I still contend it is not an inherent characteristic of the problem space's static description.

From another perspective, in an ideal world one should not have to modify the OIM when working on STDs or ADFDs. The static model should stand when developing the dynamic models because it is a different abstraction. In practice this is an iterative process because people are not perfect and they make mistakes (or overlook the best solution).
If you start modifying the OIM whenever the ADFD starts to get complicated, then even the ideal, theoretical situation becomes iterative because one can't accurately anticipate where the ADFD will get complicated when doing the OIM. From a purely philosophical view this bothers me. It also seems to break S-M's touted virtue of being able to determine when you are done (at least for the OIM).

> This is our primary point of difference. I want to remove most of
> the data manipulation from the state actions; and use them primarily
> for control-flow issues. To see how data is processed, I would use
> the DFD. So I am not splitting the data processing; I'm merging it
> into one (possibly partitioned) diagram.

I agree it is a major point of difference. One of my problems is that events are generated from ADFD actions and often this is done conditionally. To follow the processing at the ADFD level one has to look at the events, the data that is tested, and how that data was manipulated. That should all be in one diagram.

My second problem is that actions are executed in a time context dictated by the FSMs. The action ADFD encapsulates particular calculations within that context. However, with the DFD notation an entirely different timing paradigm is imposed that spans state actions. While I agree that what you propose is potentially a very elegant way for handling data updates, I am not convinced that it will be easy to verify that the two time contexts are compatible (i.e., that the STD events and the DFD triggers were always in synch). One could probably argue that the DFD is exactly the sort of thing the architecture has to deal with when maintaining data integrity, but I tend to worry when I don't see a rigorous way to reconcile complex algorithms that span time and space.

I am also not convinced that there are all that many opportunities for doing only (1). Clearly this expanded derived write is more common than the traditional (M), but I can think of several situations where it would not be applicable as is. First, a lot of data accesses and calculations result in event data rather than attribute writes. Second, the algorithm resulting in a value being written often involves input values that are on events rather than all being attributes. Third, when multiple inputs are required to calculate a new attribute value, there is often a specific time when the calculation should be performed. That timing may depend upon factors that are not mappable into the DFD scheme (e.g., you might want to trigger the calculation when a particular event occurs rather than when the input attributes are written).

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp...
>
> I think we are going in circles because there are three different issues and we
> are not coordinated about which one we are talking about in a given context.

I agree - it's a common failing of email debates.

> As I see it, the issues are:
>
> (1) Movement of transforms to DFDs.
>
> (2) Use of DFDs to replace ADFD navigation boilerplate.
>
> (3) Introduction of "derived" attributes into the OIM.
>
> These are all related, but can be confusing when we are talking past one
> another about different issues.
> When you initially proposed the DFD solution you identified two goals: (a)
> cleaning up the (M) situation and (b) removing navigational information from
> the ADFDs. From my perspective (a) wasn't a big deal because we have few (M)
> attributes but (b) was a very big deal because most of our ADFD processes are
> identifier accesses for navigating relationships. What I missed until
> recently was your emphasis on (1).

Let me try and explain how I would see the relationship between them. Firstly, (3) has bothered me since (M) attributes were first introduced. But let's ignore that for a moment. The thing that bothered me more recently was the 1:M relationship between changes in the OIM and changes in many ADFDs. This I identified as being due to the dependencies between them; and I expressed this in my initial post as (2) - though I didn't use the word "boilerplate".

But the real problem is slightly deeper than just (2). Repeated actions in ADFDs also include data transforms and tests. I.e. information is required by a state action for a reason (if that reason appears in multiple states, then the navigations do too). It's not just the navigations that must be abstracted; it's the whole datapath. Transforms, and the front-end of tests, naturally map to derived attributes.

So (1) is the means to achieve (2); and (3) is the means to achieve (1). And (3) is desirable anyway, even without (1) and (2). And, it is trivial to show that (1) and (2) are a natural consequence of (3). The mechanics of my proposal, therefore, only address (3).

> You see approaching the ADFDs with a different mindset where writes
> whose values are computed from other attributes can be placed in the
> DFD so that they are triggered by any write to the input attributes.
> This requires that the associated tests and transforms that compute
> the value to be written are also moved into the DFD.

The point about the mindset is very important. It will come up a few times in this reply.

I do not see that test processes are used to compute the values of derived attributes; although some of the processing that is currently associated with a test can be moved into a derivation on the DFD. The outputs of a test process are control flows, which can only appear on an ADFD or SDFD. The algorithm used to compute a derivation may include conditionals; but they are not test processes. For example, an "abs" transform might be defined as "x < 0 ? -x : x"; the test here is part of the algorithm, not the control flow of the domain.

> This means that what qualifies as a derived attribute is considerably
> expanded because much more complex processing can be applied than the
> simple, deterministic, static mathematical formula usually associated
> with (M) attributes.

Incorrect: I do not propose any non-deterministic maths. As I clarified in an earlier post, I see derivations as "pure" functions. I am not sure what a "static mathematical formula" really means: even long division is an algorithm. To quote OOA96 (bottom of page 8): "In the description of an attribute [...], cite the formula *or algorithm* used to determine the value of the attribute" (emphasis mine). So even OOA'96 allows algorithmic derivation.

> [It also implies that the way one thinks about and creates
> the models might be altered. More below.]

This I do agree with; and it may be useful in working round some of your concerns.

> Meanwhile I was getting all excited about (2).
> Unfortunately we did not mean the same thing when speaking of derived
> attributes (I only realized this when you said that only write accessors
> were involved). The only navigations you were removing from the ADFD were
> those associated with (1) but I wanted to remove _all_ the navigations.
> (This is something I still want to do and extensions to the DFD solution
> to allow this are what my last couple of messages have been focused upon.)

Yes, I tend to accept that some navigation will always be required, because the OIM is in 1st normal form. And I don't mind navigating to a neighbour to get a pre-calculated value; the dependencies introduced by doing this are quite localised. It is, I think, obvious that the dependencies increase non-linearly with the distance navigated (probably polynomial, not exponential; I'll let someone else do the math).

It's probably worth emphasising that my concern is the dependency, not the navigation itself. You seem to be more worried about the boilerplate nature of the navigation. This, for me, is a secondary issue: I can always generate boilerplate with a script.

> Finally, with your emphasis on (1) there really weren't any new derived
> attributes in the OIM -- the existing ones were simply viewed as derived
> when one of the existing input attributes was written.

Partly true; however, if the output of a transform does not flow to a write accessor then a derived attribute is introduced.

> As a corollary you wanted
> transient values (e.g. tape_price) for the intermediate calculations to be
> attributes somewhere. However, from my perspective of doing (2) I could see
> no way to do this except by adding new, artificial derived attributes because
> I wasn't starting from a write to an existing attribute -- I was thinking in
> terms of a navigation to a read accessor. So we started going around about
> (3) -- we had completely different views of what a derived attribute was.

I have now realised that intermediate attributes may, in some cases, break 1st normal form. So I'll drop any insistence on forcing them to be attributes on the OIM. They are analogous to transient attributes in ADFDs. But some intermediate attributes (such as tape_price) are meaningful and singular. These could appear on the OIM (that's part of analysis) - see the sketch below.

> I am prepared to argue that (1) is less desirable than (2). I deal with the
> undesirability of (1) below, but here let me point out that (2) can be handled
> by the DFD notation more generally than just the pseudo-(M) situations.

I agree with this statement; but it is my belief that (3) is desirable in its own right. (1) and (2) are inevitable consequences of (3) - the only way to prevent this consequence would be a rule in the method that says "don't do this" - and such a rule would need to be justified.

> In the
> situations where I want to eliminate accessors from the ADFD I think your
> objections based upon filters, etc. are not relevant. The only processes I am
> removing are simple identifier accesses because the data I want is in an object
> that is 2-3 relationships away in the OIM. [For the moment assume we are
> talking about navigations that you would not move to the DFD because they are
> not (M)-like.]

If you use a different mechanism, then filtering, and navigations in SDFDs or Assigner ADFDs, may not be important. But a general purpose solution to pure-(2) would have to address these issues.
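Picking up the intermediate-attribute point above, here is a minimal Python sketch (hypothetical names, reusing the car and video-rental examples) of the two kinds of intermediate just distinguished:

    def child_car_colours(parent):
        # Set-valued intermediate: one value per child, so forcing it
        # onto the parent as an attribute would break 1st normal form.
        # It stays off the OIM, like a transient attribute in an ADFD.
        return {t.car.colour for t in parent.teenagers}

    def tape_price(tape):
        # Meaningful and singular for each tape, so this intermediate
        # could legitimately appear on the OIM as a derived attribute.
        return tape.base_price * tape.category.multiplier

Both are pure functions; the difference is only whether the result is a singular, meaningful characteristic of one object.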
> I see the DFD to be simply a place to put the details when I use a shorthand in
> the ADFD to identify the data and the path. In this situation I see no problem
> with anchoring the DFD string with a begin/end process:
>
>   (get_color)<--(get_Car_ID)<--(get_TeenagerID)
>        |      R1            R2
>        |
>        +-------------------------------->(end)---->color
>
> where the "end" process is a placeholder identified in the ADFD syntax
> (possibly by a variation on the change-concept) and the output "color" maps
> into an ADFD variable (attribute, transient, or event data).

This looks to me like a macro (defined graphically). For my objections to these, see my initial post in the thread.

Your proposal does not address (3). I believe that a rigorous definition of mathematical dependence is useful in its own right. It represents a tightening of the method, with a few additional objects in the OOA-of-OOA. The "change-concept" is a result of allowing mutually dependent attributes; and transient dataflows are needed to get data to the derivation. (M) attributes are already part of the method so I am not adding anything new. All the concepts of my proposal will already be found in the "attribute description" of (M) attributes. But my slight tightening of the method has very powerful consequences, if exploited to its fullest extent.

I believe that such exploitation goes a long way towards solving issue (2); and that the remaining relationship navigations in the ADFDs are acceptable. I know you do not feel (3) is particularly important; but you have expressed the opinion that my proposal does elegantly solve it. If a solution of (3) is included in the method, then any additional proposal you have must exhibit significant benefits over the consequences of this solution. I am not convinced that your additions do offer significant benefits. (But feel free to try to persuade me; just make it clear whether you would attempt to address (3) in a final proposal.)

> [...]
> I think this reflects your assumption of (1). That is, the DFD flows
> would not exist unless _both_ A.attr and C.attr already existed in the
> OIM -- you would not move the navigation flow to the DFD unless this
> was so.

Yes, but remember that I am aggressively exploiting the concept of derived attributes. I.e., it is more likely (than at present) that both these attributes would be in the OIM.

> In my scenario of simply removing navigation accessors there would be
> no A.attr because the value to write to C.attr is on an event that
> happens to be addressed to A. That is, even though there was no
> justification for an A.attr when I created the OIM I still want to
> remove the ADFD navigation clutter in A's action. To do so under
> your restrictions would require creation of A.attr and then writing
> the event data to it, which I would regard as artificial and unnecessary.

As we noted above, to take advantage of derived attributes in the way that I suggest does require a slight shift in the mindset for creating models. Also, I have a bit more to say about event data at the end of this post.

> You are changing both the OIM and the ADFD models to accommodate the
> use of the DFD notation. That is, if there were no DFD alternative
> it probably would not have been modeled this way.

Absolutely true.

> In particular, when the OIM was developed there
> would not have been an apparent need for child_has_same_color_car. The need
> only arose from a desire to simplify the ADFD, given the DFD feature.
> I still contend it is not an inherent characteristic of the problem
> space's static description.

Ah, but this is where the "mindset shift" comes in. If you think about the datapath when creating the OIM then the attribute probably would be part of the OIM. In this new mindset, you don't create the DFD to simplify ADFDs: you create the OIM and DFD and, as a natural consequence, when you build the ADFDs, they are simple.

> [...] If you start modifying the OIM
> whenever the ADFD starts to get complicated, then even the ideal,
> theoretical situation becomes iterative because one can't accurately
> anticipate where the ADFD will get complicated when doing the OIM.
> From a purely philosophical view this bothers me. It also seems to
> break S-M's touted virtue of being able to determine when you are
> done (at least for the OIM).

I'll repeat my previous paragraph. The rules of the method, adjusted as a result of DFDs, will allow the OIM/DFD pair to be created in such a way that the ADFDs are naturally simple. I disagree with your assumption that the properties of the OIM+DFD do not allow the complexity of the ADFDs to be predicted (controlled).

I should restate, at this point, that I view the datapath as part of the static model, not the dynamic model (though I am not sure precisely what the distinction is - I think it's the split between the description of the information and the description of the behaviour).

> I agree it is a major point of difference. One of my problems is that
> events are generated from ADFD actions and often this is done conditionally.
> To follow the processing at the ADFD level one has to look at the events,
> the data that is tested, and how that data was manipulated. That should
> all be in one diagram.

Conditionality within an ADFD is not a problem. Going back to your car example, you might have an ADFD which fetches (navigates to) the parent's car colours and the children's car colours; and then uses a test process "parent has same colour car as child?". In my alternative, I created a predicate attribute (parent.child_has_same_colour_car), which I can use in the ADFD. So the ADFD contains a read accessor for that attribute which then flows into the test process (which is now a simple predicate test). The name of the attribute is meaningful, so its exact derivation (if it's derived) is not important in the ADFD.

I now see another possible side effect of the proposal: all data processing is associated with transforms; so test processes become simple comparisons. I'm not sure if the simplification is universal; but it would be nice if it is.

> My second problem is that actions are executed in a time context
> dictated by the FSMs. The action ADFD encapsulates particular calculations
> within that context. However, with the DFD notation an entirely different
> timing paradigm is imposed that spans state actions. While I agree that
> what you propose is potentially a very elegant way for handling data
> updates, I am not convinced that it will be easy to verify that the two
> time contexts are compatible

I am not proposing any time rules for the DFD. It is a static description of the data path. When the value of an attribute is used in a way that an external system can perceive then its value must be consistent with the DFD. (In practice, this means that the read accessor must retrieve a value consistent with the DFD.) The timing issue is slightly more complicated than the current situation; but not significantly.
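To illustrate the consistency requirement (and nothing more - no timing rule is implied), here is a hedged Python sketch using the crate example from OOA96; the class and accessor names are hypothetical. An architecture may run the derivation in the write accessor or in the read accessor, and a reader sees the same value either way because the derivation is a pure function:

    class EagerCrate:
        # Derive on write: the write accessor runs the derivation.
        def __init__(self, w, h, d):
            self.w, self.h, self.d = w, h, d
            self.volume = w * h * d
        def set_width(self, w):              # write accessor
            self.w = w
            self.volume = self.w * self.h * self.d

    class LazyCrate:
        # Derive on read: the write accessor does no derivation work.
        def __init__(self, w, h, d):
            self.w, self.h, self.d = w, h, d
        def set_width(self, w):              # write accessor
            self.w = w
        @property
        def volume(self):                    # read accessor runs the derivation
            return self.w * self.h * self.d

Either class satisfies "the read accessor must retrieve a value consistent with the DFD"; the choice between them is an implementation coloration, not analysis.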
The requirement on the architecture is that an ADFD receives a consistent dataset. The only complication is that there may now be some processing between the write accessor and the read accessor. One simple way for an architecture to resolve this is to associate all processing for the DFD with either a read accessor or a write accessor: thus encapsulating all processing within the time-scope of a state action.

> I am also not convinced that there are all that many opportunities for doing
> only (1). Clearly this expanded derived write is more common than the
> traditional (M), but I can think of several situations where it would not be
> applicable as is. First, a lot of data accesses and calculations result in
> event data rather than attribute writes.

I tend not to send data on events, so the problem of events is less apparent (to me). This style may become prevalent if the DFD provides a solid data transport mechanism. Why bother to send the data on an event if it's available as a local attribute anyway? (Simplifying events has useful implications for the architecture; but I shouldn't get into implementation issues here.)

> Second, the algorithm resulting in a
> value being written often involves input values that are on events
> rather than all being attributes.

Ditto.

> Third, when multiple inputs are required to calculate a new attribute
> value, there is often a specific time when the calculation should
> be performed. That timing may depend upon factors that are not mappable
> into the DFD scheme (e.g., you might want to trigger the calculation
> when a particular event occurs rather than when the input attributes
> are written).

This is only true if calculations have side effects. This is expressly prohibited by my definition of a derivation as a "pure" function. The only reason for requiring a specific timing of the calculation would be for implementation reasons. These would be expressed as colorations.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone: +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.


lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

There are some details to quibble about, but I think we are basically down to one issue: whether the new mindset can be used in enough situations. I agree that it could be used widely, but I still think it would be worth extending it to allow simple navigation replacement (e.g., for remote reads).

> > (1) Movement of transforms to DFDs.
> >
> > (2) Use of DFDs to replace ADFD navigation boilerplate.
> >
> > (3) Introduction of "derived" attributes into the OIM.
>
> The thing that bothered me more recently was the 1:M relationship
> between changes in the OIM and changes in many ADFDs. This I identified
> as being due to the dependencies between them; and I expressed this in
> my initial post as (2) - though I didn't use the word "boilerplate".
>
> But the real problem is slightly deeper than just (2). Repeated actions
> in ADFDs also include data transforms and tests. I.e. information
> is required by a state action for a reason (if that reason appears
> in multiple states, then the navigations do too). It's not just
> the navigations that must be abstracted; it's the whole datapath.
> Transforms, and the front-end of tests, naturally map to derived
> attributes.

I have to agree that this is a strong argument for (1).
But I also think I want to have the context for conditional event generation in one place.

> I do not see that test processes are used to compute the values
> of derived attributes; although some of the processing that is
> currently associated with a test can be moved into a derivation
> on the DFD. The outputs of a test process are control flows,
> which can only appear on an ADFD or SDFD. The algorithm used
> to compute a derivation may include conditionals; but they are
> not test processes. For example, an "abs" transform might be defined
> as "x < 0 ? -x : x"; the test here is part of the algorithm,
> not the control flow of the domain.

I would not regard a test as affecting the domain flow of control unless an event was conditionally generated as a result. However, you have a point that DFDs do not support test control flows per se. I see this as another restriction that would limit the utility.

> Incorrect: I do not propose any non-deterministic maths. As I clarified
> in an earlier post, I see derivations as "pure" functions. I am not
> sure what a "static mathematical formula" really means: even long
> division is an algorithm. To quote OOA96 (bottom of page 8): "In the
> description of an attribute [...], cite the formula *or algorithm*
> used to determine the value of the attribute" (emphasis mine). So
> even OOA'96 allows algorithmic derivation.

I meant that the results can be dependent upon run time conditions. For example, quicksort can be a static, deterministic algorithm -- if you define an array of values, then the resulting ordered sequence will always be the same. That result will be the same regardless of the initial order of the values or the quicksort implementation. However, if I use quicksort to sort a set of strings based upon the first character, then the resulting sequence of strings will not be statically deterministic if there are duplicate first letters because it will be dependent upon the quicksort implementation and the order in which the strings were presented to the algorithm during execution. It will only be statically deterministic relative to the order of the first letters of the strings, not to the strings themselves. The (M) attribute algorithm is restricted to situations where the algorithm and the values are sufficient to determine the outcome, including side effects, without actual execution.

> It's probably worth emphasising that my concern is the dependency, not the
> navigation itself. You seem to be more worried about the boilerplate
> nature of the navigation. This, for me, is a secondary issue: I can
> always generate boilerplate with a script.

Not in our CASE tool. It is true I am more annoyed by the wordiness and distraction, but I also care about the double edits. Really.

> >   (get_color)<--(get_Car_ID)<--(get_TeenagerID)
> >        |      R1            R2
> >        |
> >        +-------------------------------->(end)---->color
>
> This looks to me like a macro (defined graphically). For my objections
> to these, see my initial post in the thread.

I found the objection to object synchronous services, but not macros. I even searched the first few messages for "macro". So I guess you'll have to refresh my memory.

I agree this could be viewed as a kind of macro, but I don't see harm in it. Because it uses the DFD formalism it is controlled, rigorous, visible, and defined in only one place. If there is any weakness, I would think it would lie in the link between DFD and ADFD.

> Your proposal does not address (3).
> I believe that a rigorous definition
> of mathematical dependence is useful in its own right. It represents a
> tightening of the method, with a few additional objects in the
> OOA-of-OOA. The "change-concept" is a result of allowing mutually
> dependent attributes; and transient dataflows are needed to get data
> to the derivation.

It addresses (3) to the extent that it demonstrates that adding an attribute to the OIM is not necessary in some situations (e.g., a remote read access, assuming these are still necessary). If one can provide a reasonable ADFD syntax for the link, then I don't see that this is significantly looser than forcing "color" to be an attribute. I would argue that since one of S-M's claims to fame is that it is unambiguous, then if the notation is unambiguous it is sufficient for the goals of the methodology.

> (M) attributes are already part of the method so I am not adding
> anything new. All the concepts of my proposal will already be found
> in the "attribute description" of (M) attributes.

True. But I am proposing that with the introduction of a special accessor your DFD idea can be extended to eliminate a lot more duplication in the ADFD. There are already 11 accessors, so one more does not seem to be a large price to pay for the benefit. B-) Of course this depends upon how effective the new mindset is in eliminating the need for remote navigations.

> Ah, but this is where the "mindset shift" comes in. If you think about
> the datapath when creating the OIM then the attribute probably would
> be part of the OIM. In this new mindset, you don't create the DFD to
> simplify ADFDs: you create the OIM and DFD and, as a natural
> consequence, when you build the ADFDs, they are simple.

I understand the mindset issue. My problem is that I don't think you know all the data flows when you create the OIM. You will know the traditional (M)-like ones because there will be a nice, crisp formula in the problem space that advertises the data flow. But the domain solution is one mongo algorithm where events and data updates are intertwined. That solution only starts to become clear when functionality is allocated to active objects and FSMs are built. Only when you have the action descriptions will you know when and where the updates are done and what the consistency issues are. At that point you could probably do a pretty good job on the DFD. But then I think you would be in the position of backfilling the OIM to make the DFD work right. (See your example immediately below.)

> Conditionality within an ADFD is not a problem. Going back to your
> car example, you might have an ADFD which fetches (navigates to) the
> parent's car colours and the children's car colours; and then uses a
> test process "parent has same colour car as child?". In my alternative,
> I created a predicate attribute (parent.child_has_same_colour_car),
> which I can use in the ADFD. So the ADFD contains a read accessor for
> that attribute which then flows into the test process (which is now
> a simple predicate test). The name of the attribute is meaningful,
> so its exact derivation (if it's derived) is not important in the
> ADFD.

Agreed, but this backs up the point above that you could not have known about the need for Parent.child_has_same_color_car when you did the OIM and/or DFD. This is a result of where you decided to answer the question, which you would not know until you make the state models.
If you decided to answer the question in Spoiled Teenager, then the attribute would have to go there rather than in Parent. (Admittedly, it would be easy to change, but my point is that the OIM is getting fixed after it is "completed" -- the methodology's claim that you know when you are done with a model is broken.)

> I am not proposing any time rules for the DFD. It is a static description
> of the data path. When the value of an attribute is used in a way that
> an external system can perceive then its value must be consistent with
> the DFD. (In practice, this means that the read accessor must retrieve
> a value consistent with the DFD.) The timing issue is slightly more
> complicated than the current situation; but not significantly.
>
> The requirement on the architecture is that an ADFD receives a consistent
> dataset. The only complication is that there may now be some processing
> between the write accessor and the read accessor. One simple way for an
> architecture to resolve this is to associate all processing for the DFD
> with either a read accessor or a write accessor: thus encapsulating
> all processing within the time-scope of a state action.

The DFD timing model I referred to is when the traversal is triggered. That can be when an input attribute is written or when the derived attribute is read. I don't think the architecture can make this decision arbitrarily; it will affect the way the analyst models just as the rules for processing events off the queue affect the modeling.

My intuition says that if the DFD flows are triggered when the derived attribute is read, the model's karma will be in adjustment most of the time. However, the thread earlier in the year about (M) attributes suggests that this can be a slippery slope. Example: D and V are changed externally always in pairs and the domain needs a consistent M with the current pair, but the D and V changes are on separate bridge events. If M is read between the arrivals of a D and V pair, this would not work correctly regardless of the DFD time model. I can work around it in several ways, but the point I was making is that having a DFD span state actions introduces a new level of complexity to the analysis.

> > First, a lot of data accesses and calculations result in
> > event data rather than attribute writes.
>
> I tend not to send data on events, so the problem of events is less
> apparent (to me). This style may become prevalent if the DFD provides
> a solid data transport mechanism. Why bother to send the data on an
> event if it's available as a local attribute anyway?

If you are not certain when the event will be processed, then for consistency's sake you have to pass the data on the event lest the data be updated in an inconsistent manner before the event is processed. Not terribly common, perhaps, but not rare either. Another variant is the wormhole, where you don't have a choice.

> > Third, when multiple inputs are required to calculate a new attribute
> > value, there is often a specific time when the calculation should
> > be performed. That timing may depend upon factors that are not mappable
> > into the DFD scheme (e.g., you might want to trigger the calculation
> > when a particular event occurs rather than when the input attributes
> > are written).
>
> This is only true if calculations have side effects. This is expressly
> prohibited by my definition of a derivation as a "pure" function. The
> only reason for requiring a specific timing of the calculation would
> be for implementation reasons.
> These would be expressed as colorations.

I believe there are common situations where this control is necessary in the problem space. Any time you are using the asynchronous model and you have to transfer control via an event you have the potential for a data consistency problem if the input attributes are being updated asynchronously. When it occurs this is an analysis issue, not an implementation issue.

I see it occurring whenever object A contains the input data, object B has the derived attribute, and object C knows when it is appropriate to make the calculation. Imagine A collects multiple samples from hardware on a periodic basis, B holds an average of a consistent set, C controls updating the average based upon a synch signal, and the bridge randomly reads the average from B. It is up to the analyst to make sure the bridge gets consistent average values relative to the synch rather than the current hardware samples.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842  (Fax) (617)-422-3100
lahman@atb.teradyne.com


smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding very late to Dave Whipp...

> > I agree that SM-ers should not ignore the maintenance issue and
> > think the idea of "One fact in one place" should be the guiding
> > factor. We just need to agree on the facts. :-)
>
> One fact in one place is derived from the more fundamental rule:
> one change in one place.

I'm not sure I agree that this is a more fundamental rule. You have introduced a dynamic element which the original idea did not have.

> If one fact in two places can be kept consistent through mechanisms
> in the formalism then the decoupling that the redundancy provides
> may be beneficial, if used correctly.

In an abstract model, it is very hard to argue the case for any sort of redundancy. Keeping one fact in two places consistent requires a more complex formalism. The main reason for introducing redundancy into a system is in order to increase its performance. I can't see that issue being applicable here.

> > There seems to be an implicit connection in the OOA96 Report between
> > mathematically dependent attributes and transient data items.
>
> I see no such connection. All OOA'96 says is that, when considering
> the data as objects; and not as relationships in normal form; then
> it is common to discover properties of objects which exhibit
> mathematical dependence. In many cases this dependency may break
> 4th normal form.

I think you may have missed my point (see below).

> The tagging of an attribute with (M) is pretty much arbitrary in
> the case of mutually dependent characteristics (i.e. any
> rearrangeable formulae). The examples given are the mass, volume,
> density example that I have used; and a simple crate with width,
> height, depth and volume.

I don't believe it's as arbitrary as is made out. When modelling a domain the analyst adopts a point of view generally determined by the requirements. If part of the purpose of the domain is to ascertain the volume of a crate from the width, height and depth, then the width, height and depth characteristics should be identified as necessary to fulfil the purpose (finding the volume). A model always has data that is available to it, data that it needs for later and data that it produces (the reason for its existence).
Therefore, it's the resulting volume attribute that should be tagged with a (M). However, if the determined volume data item flows out of the domain immediately I see no good reason why it should appear on the OIM as an attribute. Even if the crate's volume is required later it should still not appear on the OIM, since it can be recalculated.

It's interesting to speculate about what exactly can be measured from Things in the Real World and what *must* be calculated. For example, is it possible to directly measure the density of a crate?

> Nowhere is anything mentioned about transient attributes.

There is nothing mentioned about Transient Attributes because I invented the term! I said:

smf> I use the name Transient Attribute for data items that appear
smf> on event flows but do not appear on the OIM.

Transient Attributes are different to Transient Data Items. See the OOA96 Report (page 45) which basically says: All Transient Data Items appearing with an event or on data flows must appear as attributes on the OIM.

> > Since
> > transient data items must now appear as attributes on the OIM you
> > can just put them in as mathematically dependent attributes. That
> > solves the perceived problem regarding their type declaration in the
> > OOA. Unfortunately, this *solution* degrades the OIM.
>
> Transient attributes are different to derived attributes (even the
> intermediate nodes on my DFD).

Yes.

> A derived attribute only has a value during the life of an action.

What!? I thought your *derived attribute* was just another name for a Mathematically Dependent Attribute which has a value during the life of an object. If I'm wrong, could you define it again?

Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:
> [one fact in one place Vs. one change in one place]
> I'm not sure I agree that this is a more fundamental rule. You have
> introduced a dynamic element which the original idea did not have.

The original rule is derived from database theory. I do not believe we should assume that its justification is valid for SM. I usually use the 1-fact rule in the context of maintenance - for which the 1-change rule is more fundamental. I'll agree that the original database derivation was more concerned with data integrity.

> > If one fact in two places can be kept consistent through mechanisms
> > in the formalism then the decoupling that the redundancy provides
> > may be beneficial, if used correctly.
>
> In an abstract model, it is very hard to argue the case for any sort
> of redundancy. Keeping one fact in two places consistent requires
> a more complex formalism.

You are probably right; my comment was not particularly well thought through. However, let's get back on topic: where is the redundancy in my proposal?

> > The tagging of an attribute with (M) is pretty much arbitrary in
> > the case of mutually dependent characteristics (i.e. any
> > rearrangeable formulae). The examples given are the mass, volume,
> > density example that I have used; and a simple crate with width,
> > height, depth and volume.
>
> I don't believe it's as arbitrary as is made out. When modelling a
> domain the analyst adopts a point of view generally determined by
> the requirements.
> If part of the purpose of the domain is to
> ascertain the volume of a crate from the width, height and depth,
> then the width, height and depth characteristics should be
> identified as necessary to fulfil the purpose (finding the volume).

This is true, as far as it goes. However, another part of the domain's purpose may require the height to be derived (e.g. "I need a 5 cubic metre container with a standard XYZ footprint").

> [...]
> However, if the determined volume data item flows out of the domain
> immediately I see no good reason why it should appear on the OIM as
> an attribute. Even if the crate's volume is required later it should
> still not appear on the OIM, since it can be recalculated.

Output-only interfaces are a bit of a sore point. I regularly use a language called VHDL to describe hardware components. You define the interface to a component in terms of input signals and output signals (and a few others). You can't change an input signal (OK) but, also, you can't read an output signal. This may seem reasonable but it's terribly inconvenient. The almost universal workaround is to declare an intermediate signal which can be both read and written; the output is then driven from this intermediate signal.

Why do I bring this up? Well, it seems fine, in theory, to say "this value flows directly out of the component (domain)"; but, in practice, it just doesn't work. If you start out by putting it on the OIM then you won't need to re-work it later when you find that, in fact, you do want to know the value.

> It's interesting to speculate about what exactly can be measured
> from Things in the Real World and what *must* be calculated. For
> example, is it possible to directly measure the density of a crate?

I would say that this is largely irrelevant. You can probably devise instruments to measure almost anything; and then derive whatever information you want. A measurement is only a viewpoint on a concept; when the concept is used, a different viewpoint may be important.

Plus, of course, how do you define the term "directly measure"? If you mean, is it possible to measure volume without measuring length, width, height, then the answer is yes (wrap it in waterproof plastic; completely submerge it in a tank of water; measure the amount of water displaced); it is also possible to measure density without measuring mass or volume (submerge the object in various liquids and test for neutral buoyancy). These are not direct measurements; but they don't use any of the variables in the original equation.

> > A derived attribute only has a value during the life of an action.
>
> What!? I thought your *derived attribute* was just another name for
> a Mathematically Dependent Attribute which has a value during the
> life of an object. If I'm wrong, could you define it again?

Sorry, a typo (the sentence is taken from the middle of a paragraph where the context is transient attributes). Your synonym is correct. (Though, I have since realised that some intermediate derived attributes would have multiple values - therefore I've stopped saying that all the stepping-stones should appear on the OIM.)

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone: +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> Responding to Whipp...
> There are some details to quibble about, but I think we are basically down
> to one issue: whether the new mindset can be used in enough situations. I
> agree that it could be used widely, but I still think it would be worth
> extending it to allow simple navigation replacement (e.g., for remote reads).

You can always extend proposals. It's known as feature-creep ;-). Let me propose my own addition: a special guard associated with the "create instance" accessor - this would be used to define the initial value of an attribute. (Look at the video rental example; it should be obvious where to use it: hint, it's not legal to unilaterally change the price after a contract is agreed.)

We may be down to just one issue; but I'm sure we can stretch out the quibbling over a few more posts yet :-).

> I would not regard a test as affecting the domain flow
> of control unless an event was conditionally generated as a result.
> However, you have a point that DFDs do not support test control flows
> per se. I see this as another restriction that would limit the utility.

I would argue that if a test doesn't influence the flow of control then it shouldn't be there. A test that operates purely as part of a localised algorithm should be encapsulated within a transform. We can, if you want, discuss the issue of iterative algorithms where the data required depends on the processing of previous iterations; but I don't think that it's relevant.

ADFDs support test processes (and therefore control flows) because they are concerned with the domain flow-of-control. The proposed DFD would not support test processes because it is a static model of the mathematical dependencies between attributes. If the description of the dependency is algorithmic then there may be a localised test within the derivation function; but this does not directly affect the domain flow of control.

> > Incorrect: I do not propose any non-deterministic maths. [...]
>
> I meant that the results can be dependent upon run time conditions.
> For example, [sorting with comparison based on only part of an
> attribute's value]

It should not be possible to define such a dependency because, to do so, would imply that the source data is not in first normal form (no internal structure). You can, of course, use the standard counter to this normal form (a function with a M:1 mapping) to extract partial data. However, if you do this then you are explicitly stating, in the derivation, that you are using an equivalence class. If you, the analyst, are wrong, then don't blame the method.

> The (M) attribute algorithm is restricted to situations where the
> algorithm and the values are sufficient to determine the outcome,
> including side effects, without actual execution.

Please quote a reference. I will admit that I had not previously considered the issue of equivalence classes in the derivation - I had assumed that either: they would be disambiguated in the formula (or algorithm); or that the analyst *really* means "I don't care". I don't think that this is any different from (M) attributes.

> > This looks to me like a macro (defined graphically). For my objections
> > to these, see my initial post in the thread.
>
> I found the objection to object synchronous services, but not macros.
> I even searched the first few messages for "macro". So I guess you'll
> have to refresh my memory.

You're right: I just did the same search; the only reference to macros was in a different thread where Neil Lang said he thought they are a good idea.
My objection is approximately the same as for non-wormhole synchronous services: they hide the complexity instead of getting rid of it. Have you ever tried to maintain a C program which relies heavily on macros?

> I agree this could be viewed as a kind of macro, but I don't see harm
> in it. Because it uses the DFD formalism it is controlled, rigorous,
> visible, and defined in only one place. If there is any weakness, I
> would think it would lie in the link between DFD and ADFD.

The important consideration is the order of the dependencies. The macro mindset uses the DFD as a layer *below* the ADFD: the ADFD becomes hierarchical, with the DFD being the sub-layer (it may have a high fan-in, but it's still hierarchical). This suffers the problems of hierarchical techniques such as functional decomposition (the main ones are that abstractions depend on details; and that you start wading through diagrams to follow a concept).

The mindset I want to promote is that the DFD is a layer *above* the ADFD. The dependency is inverted. They are not related by hierarchy. The DFD does not exist to serve the ADFD. The ADFD uses attributes in the OIM; the DFD defines the mathematical relationships between attributes in the OIM.

> It addresses (3) to the extent that it demonstrates that adding an
> attribute to the OIM is not necessary in some situations (e.g., a
> remote read access, assuming these are still necessary). If one can
> provide a reasonable ADFD syntax for the link, then I don't see that
> this is significantly looser than forcing "color" to be an attribute.
> I would argue that since one of S-M's claims to fame is that it is
> unambiguous, then if the notation is unambiguous it is sufficient for
> the goals of the methodology.

An unambiguous notation is important; but the method is more than the notation. Other goals include translatability and maintainability. A notational shortcut has no effect on translation (because it has no effect on the OOA-of-OOA) but it will affect maintenance. In my experience (personal opinion), macros are generally detrimental to maintainability.

> But I am proposing that with the introduction of a special accessor your
> DFD idea can be extended to eliminate a lot more duplication in the ADFD.
> There are already 11 accessors, so one more does not seem to be a large
> price to pay for the benefit. B-) Of course this depends upon how
> effective the new mindset is in eliminating the need for remote navigations.

It also depends on the benefit of the new accessor - I worry that the benefit may be negative. In SMALL, you already have a chained-navigation operator as a notational shortcut. The work done by a macro may not be as obvious. Why doesn't my proposal suffer the same problem? Because the derivation of a derived attribute does not exist for the benefit of the ADFD.

> I understand the mindset issue. My problem is that I don't think you
> know all the data flows when you create the OIM. You will know the
> traditional (M)-like ones because there will be a nice, crisp formula
> in the problem space that advertises the data flow. But the domain
> solution is one mongo algorithm where events and data updates are
> intertwined. That solution only starts to become clear when
> functionality is allocated to active objects and FSMs are built. Only
> when you have the action descriptions will you know when and where the
> updates are done and what the consistency issues are. At that point
> you could probably do a pretty good job on the DFD.
> But then I think
> you would be in the position of backfilling the OIM to make the DFD
> work right.

Do you remember the thread a couple of weeks back, where you decided that an object shouldn't exist after you realised that it had 10^8 instances? You claimed then that this was not implementation pollution because you could justify the change in its own right, once the issue had been recognised. If a mathematically dependent attribute is discovered while building an ADFD, but can then be justified in its own right, would you allow me to claim that the process of discovery should not lead you to conclude that an attribute is artificial?

> Agreed, but this backs up the point above that you could not have
> known about the need for Parent.child_has_same_color_car when you
> did the OIM and/or DFD. This is a result of where you decided to
> answer the question, which you would not know until you make the
> state models. If you decided to answer the question in Spoiled
> Teenager, then the attribute would have to go there rather than in
> Parent. (Admittedly, it would be easy to change but my point is
> that the OIM is getting fixed after it is "completed" -- the
> methodology's claim that you know when you are done with a model
> is broken.)

There are two issues: first, when do you realise that you want to ask the question; and secondly, where do you add the attribute that is most directly related to the answer to that question. Your argument is based on the latter.

Consider your simple model:

>   Parent <-------> Car
>   Parent <------->> Spoiled Teenager
>   Spoiled Teenager <-----> Car

To start my answer, I'll upgrade the model a bit:

    Parent <------->> Car
    Parent <<------->> Spoiled Teenager
    Spoiled Teenager <----->> Car

The important difference is that I've made the parent::teenager relationship M:M. There will, of course, be an associative object.

Having done this, where do I place the attribute for the question "does the parent have any teenager with the same colour car?" There is only one option: on the parent. Putting it anywhere else would not answer the question. How about "does the teenager have a parent with the same colour car?" - again, only one option: it belongs on the teenager. Finally, "do this parent and that teenager have the same colour car?" Again, there is only one option: the associative object on the relationship. (A sketch of the three placements appears below.)

With your model, the answer to that last question would be associated with the teenager, because that's the only place where it fits. To put it in the parent would break 1st normal form. However, if the analyst asks the question "why is the answer to this symmetrical question associated with the teenager?" this may provoke the thought "Hmm, perhaps we need an associative object here, even though the relationship is 1:M and doesn't need it for relationship formalisation ... just a sec! a teenager could have 2 parents - of course it needs an assoc!"

So, it is quite easy to argue that the location of the attribute can be determined without looking at the ADFDs. The only times when it is not immediately apparent are analogous to the question of where to formalise a 1:1 relationship.

And, to backtrack a bit: how might you guess that the attribute is needed, before you build the ADFDs? Well, it's quite possible that you will find the clue in the requirements. Of course, you may not notice it until you've built some ADFDs; but once it's been identified, its inclusion in the OIM can be justified from the requirements, not the ADFD.
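To make the three placements concrete, here is a hedged Python sketch (hypothetical names; Parentage stands for the associative object on the M:M parent::teenager relationship):

    def child_has_same_colour_car(parent):
        # Lives on Parent: "any teenager with the same colour car?"
        colours = {car.colour for car in parent.cars}
        return any(car.colour in colours
                   for t in parent.teenagers for car in t.cars)

    def parent_has_same_colour_car(teenager):
        # Lives on Spoiled Teenager: "any parent with the same colour car?"
        colours = {car.colour for car in teenager.cars}
        return any(car.colour in colours
                   for p in teenager.parents for car in p.cars)

    def same_colour_car(parentage):
        # Lives on the associative object: "do *this* parent and *that*
        # teenager have the same colour car?"
        p_colours = {car.colour for car in parentage.parent.cars}
        return any(car.colour in p_colours
                   for car in parentage.teenager.cars)

Each derivation is anchored to the one object that can answer its question without breaking 1st normal form.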
> The DFD timing mode I referred to is when the traversal is triggered. That can be when an input attribute is written or when the derived attribute is read. I don't think the architecture can make this decision arbitrarily; it will affect the way the analyst models just as the rules from processing events off the queue affect the modeling.

I disagree. The answer to the calculation is independent of when it is calculated (unless you start doing silly things with equivalence classes: in that case, you don't care which result you get).

> My intuition says that if the DFD flows are triggered when the derived attribute is read, the model's karma will be in adjustment most of the time. However, the thread earlier in the year about (M) attributes suggests that this can be a slippery slope. Example: D and V are changed externally always in pairs and the domain needs a consistent M with the current pair but the D and V changes are on separate bridge events. If M is read between the arrivals of a D and V pair, this would not work correctly regardless of the DFD time model.

Just a moment: you'll be using a "change-concept", won't you? When you receive the V, then you'll hold D constant and change M; when you change D, you'll hold V constant and change M. Within the model, the trio will always be consistent. Alternatively, use the change-concept that says "waiting for D" when you receive the initial V, and only sensitise the DFD to update M when the D is received (this second variant is sketched below). This is a slightly messy solution because it allows temporary inconsistency under the control of the analyst; but such temporary inconsistency is apparent in other parts of the method. The problem with the delay between the D and V inputs is nothing to do with the (M) attributes: it's an analysis-level protocol issue. (I have some ideas that would handle it, involving M:1 relationships between wormholes and SDFDs; but let's deal with proposals one at a time.)

> > I tend not to send data on events, so the problem of events is less apparent (to me). This style may become prevalent if the DFD provides a solid data transport mechanism. Why bother to send the data on an event if it's available as a local attribute anyway?

> If you are not certain when the event will be processed, then for consistency's sake you have to pass the data on the event lest the data be updated in an inconsistent manner before the event is processed. Not terribly common, perhaps, but not rare either. Another variant is the wormhole, where you don't have a choice.

Allowing non-attribute data on an event introduces the problem regardless of derived attributes (that's probably why an attempt was made to close that loophole). It is always possible to construct a model without passing data on events. My proposal does amplify the problem, though.

The case of wormhole data can be argued a different way: only one side of the wormhole is within the domain. So there is no consistency issue. The remaining problem revolves around the possibility of transforming the wormhole data before you store it in an attribute: the simple response is "don't do that". (Please provide an example if you think this rule would be too restrictive.)
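Returning to the "waiting for D" variant mentioned above: here is a rough Python sketch of one plausible semantics (invented names, and only an illustration, not part of the method). M = D*V is only re-derived when the change-concept releases the guard, so a half-updated (D,V) pair is never visible through M:

    # Sketch of a guarded derived attribute; the analyst's state machine
    # decides when a (D, V) pair is complete and releases the guard.
    class GuardedProduct:
        def __init__(self, d, v):
            self._d, self._v = d, v
            self._m = d * v                    # last consistent derivation

        def write_d(self, d, both_values_arrived=False):
            self._d = d
            if both_values_arrived:            # change-concept releases the
                self._m = self._d * self._v    # guard: re-derive M

        def write_v(self, v, both_values_arrived=False):
            self._v = v
            if both_values_arrived:
                self._m = self._d * self._v

        def read_m(self):
            return self._m                     # always from a complete pair

    pair = GuardedProduct(d=2, v=10)
    pair.write_v(30)                           # first half of a new pair
    assert pair.read_m() == 20                 # M still sees the old pair
    pair.write_d(3, both_values_arrived=True)  # second half releases guard
    assert pair.read_m() == 90                 # new pair now visible

The point of the sketch is only that the analyst, not the architecture, decides when the new pair becomes visible.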
> I believe there are common situations where this control is necessary in the problem space. Any time you are using the asynchronous model and you have to transfer control via an event you have the potential for a data consistency problem if the input attributes are being updated asynchronously. When it occurs this is an analysis issue, not an implementation issue.

The problems are primarily seen when you send data on an event. Again, the problem exists even without derived attributes; and there are appropriate analysis techniques to avoid problems.

> I see it occurring whenever object A contains the input data, object B has the derived attribute, and object C knows when it is appropriate to make the calculation. Imagine A collects multiple samples from hardware on a periodic basis, B holds an average of a consistent set, C controls updating the average based upon a synch signal, and the bridge randomly reads the average from B. It is up to the analyst to make sure the bridge gets consistent average values relative to the synch rather than the current hardware samples.

I'm sure you have a specific example in mind, but I have a feeling that you've contrived a solution to create the problem. One possible solution, given your solution, would be to create two attributes: continuous_average and sampled_average. Or possibly you'd decide to reorganise your bridges so that your domain outputs the current value of the average (using an output wormhole) when it receives the synchronisation signal. Or, given that the problem only arises because of the inter-domain communication, you could clean it up using an architectural feature that updates the average on the synchronisation signal: the domain never sees the synch signal. (This solution would be invalid if the problem can be associated with intra-domain issues.)

Yet another solution is another bit of feature-creep: a special accessor which controls guards on the DFD without any specific write accessor. (The SDFD that gets the synch signal would use this special accessor to move the data.) This would be consistent with the view of the DFD as a datapath description; but it would rather blow holes in any argument that assumes the DFD is a static description of mathematical dependency. If necessary, I can generate a few more solutions which don't require any new accessor processes.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> I would argue that if a test doesn't influence the flow of control then it shouldn't be there. A test that operates purely as part of a localised algorithm should be encapsulated within a transform. We can, if you want, discuss the issue of iterative algorithms where the data-required depends on the processing of previous iterations; but I don't think that it's relevant.

That is why I put the "" in my comment. A test can affect what value is placed in an attribute and that value may later (in another action) determine whether an event is generated. If this is the case I want to see how that data is updated when I am looking at STDs (i.e., I want visibility for the conditionality of the update at the STD level of abstraction).
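For concreteness, the pattern in question is roughly the following -- a Python sketch with invented names, not anyone's actual model. A test buried in one action decides what gets written, and a later action tests that attribute to decide whether an event is generated:

    # Sketch: the conditionality of the write is what lahman wants
    # visible at the STD level of abstraction.
    class Oven:
        def __init__(self):
            self.over_temp = False

        def action_record_sample(self, reading, limit):
            # the "hidden" test: what is written depends on run-time data
            self.over_temp = reading > limit

        def action_check(self, queue):
            # flow of control in a later action depends on that write
            if self.over_temp:
                queue.append("EV1: Shutdown requested")

    oven, queue = Oven(), []
    oven.action_record_sample(reading=210, limit=200)
    oven.action_check(queue)
    assert queue == ["EV1: Shutdown requested"]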
> The proposed DFD would not support test processes because it is a static model of the mathematical dependencies between attributes. If the description of the dependency is algorithmic then there may be a localised test within the derivation function; but this does not directly affect the domain flow of control.

Then you are extending the DFD; a DFD only specifies that data _may_ flow, not _whether_ it will. [I don't have a problem with that, though, and I think you need it to make this approach more applicable.]

> > I meant that the results can be dependent upon run time conditions. For example, [sorting with comparison based on only part of an attribute's value]

> It should not be possible to define such a dependency because, to do so, would imply that the source data is not in second normal form (no internal structure). You can, of course, use the standard counter to this normal form (a function with a M:1 mapping) to extract partial data. However, if you do this then you are explicitly stating, in the derivation, that you are using an equivalence class. If you, the analyst, are wrong, then don't blame the method.

The example was simply to demonstrate concisely that there are run time issues that preclude a statically deterministic formula. The same thing can be demonstrated with normal data, it just gets more complicated; all you need is a test in the algorithm that depends upon run-time values.

> > The (M) attribute algorithm is restricted to situations where the algorithm and the values are sufficient to determine the outcome, including side effects, without actual execution.

> Please quote a reference. I will admit that I had not previously considered the issue of equivalence classes in the derivation - I had assumed that either: they would be disambiguated in the formula (or algorithm); or that the analyst *really* means "I don't care". I don't think that this is any different from (M) attributes.

I was imprecise. I should have said, "The (M) attribute algorithm supported by OOA96...". As I read OOA96 you can't say V = f(M,D) where f is not defined at the domain level of abstraction. Among other things, it can't be simulated. I believe your approach opens this up to include dynamic algorithms because arbitrarily complex processing can be performed in the transforms.

> You're right: I just did the same search; the only reference to macros was in a different thread where Neil Lang said he thought they are a good idea. My objection is approximately the same as for non-wormhole synchronous services: they hide the complexity instead of getting rid of it. Have you ever tried to maintain a C program which relies heavily on macros?

But I don't see that it is hidden any more than an ADFD hides state action complexity. It is exposed in the DFD. Also, I don't buy the C analogy; though the C macro is exposed in the header, its execution depends upon lexical context that may have side effects that are not at all obvious (which is why one parenthesizes arguments in C macros, for example). This is not the case with the DFD.

> The important consideration is the order of the dependencies. The macro mindset uses the DFD as a layer *below* the ADFD: the ADFD becomes hierarchical, with the DFD being the sub-layer (it may have a high fan-in, but it's still hierarchical).
> This suffers the problems of hierarchical techniques such as functional decomposition (the main ones are that abstractions depend on details, and that you start wading through diagrams to follow a concept).
>
> The mindset I want to promote is that the DFD is a layer *above* the ADFD. The dependency is inverted. They are not related by hierarchy. The DFD does not exist to serve the ADFD. The ADFD uses attributes in the OIM; the DFD defines the mathematical relationships between attributes in the OIM.

And a navigation path between objects isn't relevant to the OIM? It seems to me that you are saying that you can describe how attributes are related in the DFD but not how objects are. I do not see any distinction in the level of abstraction of the descriptions; the fact that the ADFD makes use of them doesn't imply a descending hierarchy any more than referencing a supertype attribute in a subtype implies the supertype is a descendent.

> An unambiguous notation is important; but the method is more than the notation. Other goals include translatability and maintainability. A notational shortcut has no effect on translation (because it has no effect on the OOA-of-OOA) but it will affect maintenance. In my experience (personal opinion), macros are generally detrimental to maintainability.

Yes, but this is not a macro in that sense -- it is exposed unambiguously and without side effects in the DFD just as the relationship between attributes is. My argument is simply that the details of navigation are not relevant at the ADFD level of abstraction so I'm not missing anything. I could argue that the details of navigation are quite relevant to the OIM and, therefore, your DFD is a better place to describe them.

> It also depends on the benefit of the new accessor - I worry that the benefit may be negative. In SMALL, you already have the chained-navigation operator as a notational shortcut. The work done by a macro may not be as obvious. Why doesn't my proposal suffer the same problem? Because the derivation of a derived attribute does not exist for the benefit of the ADFD.

It is true that SMALL makes navigation much less painful than in an ADFD because it eliminates the data store accesses for referential attributes. But it is still left with the maintenance problem because if you change a relationship in the OIM you have to modify all the SMALL accesses that used it. Your proposal would not help this but my extension would.

> > I understand the mindset issue. My problem is that I don't think you know all the data flows when you create the OIM. You will know the traditional (M)-like ones because there will be a nice, crisp formula in the problem space that advertises the data flow. But the domain solution is one mongo algorithm where events and data updates are intertwined. That solution only starts to become clear when functionality is allocated to active objects and FSMs are built. Only when you have the action descriptions will you know when and where the updates are done and what the consistency issues are. At that point you could probably do a pretty good job on the DFD. But then I think you would be in the position of backfilling the OIM to make the DFD work right.

> Do you remember the thread a couple of weeks back, where you decided that an object shouldn't exist after you realised that it had 10^8 instances?
> You claimed then that this was not implementation pollution because you could justify the change in its own right, once the issue had been recognised.
>
> If a mathematically dependent attribute is discovered while building an ADFD, but can then be justified in its own right, would you allow me to claim that the process of discovery should not lead you to conclude that the attribute is artificial?

I argued that the change could be justified purely on OOA modeling technique. It happened that 10**8 was much more obvious and that triggered the correct thinking. But the model could have been correctly formulated without knowing the number of instances. My argument is that you cannot do the correct thinking for the OIM/DFD until you are doing state models.

> There are two issues: first, when do you realise that you want to ask the question; and secondly, where do you add the attribute that is most directly related to the answer to that question. Your argument is based on the latter.
>
> And, to backtrack a bit: how might you guess that the attribute is needed, before you build the ADFDs? Well, it's quite possible that you will find the clue in the requirements. Of course, you may not notice it until you've built some ADFDs; but once it's been identified, its inclusion in the OIM can be justified from the requirements, not the ADFD.

All very nice, but this last paragraph is the key issue. What if I don't know which of those questions will be appropriate when doing the OIM? Which question is appropriate is more likely to depend upon how I apportion the larger functionality than on specific requirements (e.g., the law says you can't have two cars with the same color in a family with income over X so one will have to be repainted when the Breadwinner Raise event comes in).

> > The DFD timing mode I referred to is when the traversal is triggered. That can be when an input attribute is written or when the derived attribute is read. I don't think the architecture can make this decision arbitrarily; it will affect the way the analyst models just as the rules from processing events off the queue affect the modeling.

> I disagree. The answer to the calculation is independent of when it is calculated (unless you start doing silly things with equivalence classes: in that case, you don't care which result you get).

Not necessarily in the asynchronous view of time. The analyst has to know when data will be consistent. If the architecture updates M=VD when either V or D gets updated, the analyst will have to handle consistency for pairs of updates in the OOA but if it is calculated when M is read, then the analyst only needs to be careful when M is read. The models will be different.

> > My intuition says that if the DFD flows are triggered when the derived attribute is read, the model's karma will be in adjustment most of the time. However, the thread earlier in the year about (M) attributes suggests that this can be a slippery slope. Example: D and V are changed externally always in pairs and the domain needs a consistent M with the current pair but the D and V changes are on separate bridge events. If M is read between the arrivals of a D and V pair, this would not work correctly regardless of the DFD time model.

> Just a moment: you'll be using a "change-concept", won't you? When you receive the V, then you'll hold D constant and change M; when you change D, you'll hold V constant and change M.
> Within the model, the trio will always be consistent. Alternatively, use the change-concept that says "waiting for D" when you receive the initial V, and only sensitise the DFD to update M when the D is received; this is a slightly messy solution because it allows temporary inconsistency under the control of the analyst; but such temporary inconsistency is apparent in other parts of the method.

I missed something somewhere. How does "waiting for D" work? The DFD isn't a state machine. When a value of D was supplied, how would it know whether it should continue and calculate M or wait for a V? It seems to me that a change-concept would have to be provided, but this is supplied by the ADFD -- so the ball is back in the analyst's court.

> > I can work around it in several ways, but the point I was making is that having a DFD span state actions introduces a new level of complexity to the analysis.

> I think that the extra complexity goes into the architecture. A simple rule would be: "update attributes, that will be held constant, in the write accessor (before the write); update everything else in the read accessor".

This will take some more convincing, but it may depend upon the "waiting for D" answer above.

> The case of wormhole data can be argued a different way: only one side of the wormhole is within the domain. So there is no consistency issue. The remaining problem revolves around the possibility of transforming the wormhole data before you store it in an attribute: the simple response is "don't do that". (Please provide an example if you think this rule would be too restrictive.)

To send a calculated value out of the domain you would have to either not use the DFD approach or create an OIM attribute for the value that was then passed to the bridge. I would regard the latter as a clear case of creating an artificial attribute that had little to do with the domain's abstraction. So I think this is one of the cases where the approach would not apply and it would be handy to have my read accessor extension.

I agree that incoming data would almost always be attribute data because typically one would let the bridge handle this sort of thing. [One could argue that the wormhole paper doesn't allow this because it doesn't support the semantic shift that you and I would like to see.]

> > I believe there are common situations where this control is necessary in the problem space. Any time you are using the asynchronous model and you have to transfer control via an event you have the potential for a data consistency problem if the input attributes are being updated asynchronously. When it occurs this is an analysis issue, not an implementation issue.

> The problems are primarily seen when you send data on an event. Again, the problem exists even without derived attributes; and there are appropriate analysis techniques to avoid problems.

I disagree. They most often occur because you cannot predict when the event will be processed so you cannot guarantee data consistency for the target action's read accessors.

> I'm sure you have a specific example in mind, but I have a feeling that you've contrived a solution to create the problem. One possible solution, given your solution, would be to create two attributes: continuous_average and sampled_average.
> Or possibly you'd decide to reorganise your bridges so that your domain outputs the current value of the average (using an output wormhole) when it receives the synchronisation signal.

Bridges are not necessarily involved. The synchronization signal I had in mind was an internal domain event (e.g., an iteration over collecting samples was completed). As a more specific example, we have done this sort of thing to average instrument readings when there is noise. The domain's specification defines the number of samples and taking a sample requires multiple events. In our case we also controlled when the average was used, but one could envision another fuzzy logic domain that periodically looked at the average asynchronously. It would be the domain's responsibility to ensure consistency by only calculating a new value when the sampling completion event arrived (and then sending an event to start collecting the next set of samples).

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> That is why I put the "" in my comment. A test can affect what value is placed in an attribute and that value may later (in another action) determine whether an event is generated. If this is the case I want to see how that data is updated when I am looking at STDs (i.e., I want visibility for the conditionality of the update at the STD level of abstraction).

If the discriminating attribute is written in one ADFD, and read in a different one (possibly in a different object) then I don't see the benefit of seeing the derivation in ADFDs over the DFD. You will still need to check more than one diagram. Indeed, using the attribute to control the test is very much in the spirit of the DFD.

> Then you are extending the DFD; a DFD only specifies that data _may_ flow, not _whether_ it will. [I don't have a problem with that, though, and I think you need it to make this approach more applicable.]

On the DFD, the guards specify the "whether". If a flow isn't guarded, then data does flow and derived attributes are updated (continuously).

> The example was simply to demonstrate concisely that there are run time issues that preclude a statically deterministic formula. The same thing can be demonstrated with normal data, it just gets more complicated; all you need is a test in the algorithm that depends upon run-time values.

It doesn't take a complex example to show that results depend on run-time values: a=b+c depends on the run-time values of b and c. I thought your point was that there could be a difference between the simulation results and the eventual program execution. An example of this is the find-one accessor. However, I don't see how derived attributes contribute to this issue significantly.

> I was imprecise. I should have said, "The (M) attribute algorithm supported by OOA96...". As I read OOA96 you can't say V = f(M,D) where f is not defined at the domain level of abstraction. Among other things, it can't be simulated. I believe your approach opens this up to include dynamic algorithms because arbitrarily complex processing can be performed in the transforms.

But then, OOA 96 is very loose on what "f" can be. For (M) attrs, it just says "cite the formula or algorithm".
Is the result of "multiply" defined at the domain level, or in the architectural domain? Or somewhere else? How about "mean-average", "rms", etc.? I assume that anything that is valid for a transform process is also valid for a derivation formula.

In OOA96, there was no simulation associated with (M) attributes because it was assumed that the value would be explicitly updated in the ADFDs (or SDFDs) using a transform process and a write accessor. If necessary, I can have a look for the post of Neil Lang where he clarified this point.

> But I don't see that it is hidden any more than an ADFD hides state action complexity. It is exposed in the DFD.
> [...]
> > The mindset I want to promote is that the DFD is a layer *above* the ADFD. The dependency is inverted. They are not related by hierarchy. The DFD does not exist to serve the ADFD. The ADFD uses attributes in the OIM; the DFD defines the mathematical relationships between attributes in the OIM.
>
> And a navigation path between objects isn't relevant to the OIM? It seems to me that you are saying that you can describe how attributes are related in the DFD but not how objects are. I do not see any distinction in the level of abstraction of the descriptions; the fact that the ADFD makes use of them doesn't imply a descending hierarchy any more than referencing a supertype attribute in a subtype implies the supertype is a descendent.

If you draw the dependency diagram for my proposal, I believe it is:

         ADFD
          |
    DFD  STD
     |    |
      \   /
       | |
       v v
       OIM

I believe yours is:

    DFD
     |
     v
    ADFD
     |
     v
    STD
     |
     v
    OIM

I.e., if you change the ADFD (e.g. move some behaviour into a different object), you may need to change the DFD to construct new macros. Also, you don't know what macros you need until you've constructed the ADFDs. I am attempting to reduce the depth of the dependency tree by creating a DFD that has no dependencies with the ADFDs.

> Yes, but this is not a macro in that sense -- it is exposed unambiguously and without side effects in the DFD just as the relationship between attributes is. My argument is simply that the details of navigation are not relevant at the ADFD level of abstraction so I'm not missing anything. I could argue that the details of navigation are quite relevant to the OIM and, therefore, your DFD is a better place to describe them.
> [...]
> It is true that SMALL makes navigation much less painful than in an ADFD because it eliminates the data store accesses for referential attributes. But it is still left with the maintenance problem because if you change a relationship in the OIM you have to modify all the SMALL accesses that used it. Your proposal would not help this but my extension would.

A composed relationship might help you. If you define (on the OIM) R4 = R1+R2+R3 then you can use R4 in the ADFDs. If you later change this to R4 = R1+R2+R5+R3, then navigations across R4 aren't affected. This seems to be the gist of your amendment.

> > And, to backtrack a bit: how might you guess that the attribute is needed, before you build the ADFDs? Well, it's quite possible that you will find the clue in the requirements. Of course, you may not notice it until you've built some ADFDs; but once it's been identified, its inclusion in the OIM can be justified from the requirements, not the ADFD.

> All very nice, but this last paragraph is the key issue. What if I don't know which of those questions will be appropriate when doing the OIM?
> Which question is appropriate is more likely to depend upon how I apportion the larger functionality than on specific requirements (e.g., the law says you can't have two cars with the same color in a family with income over X so one will have to be repainted when the Breadwinner Raise event comes in).

It's nice that you give a specific example. It means I can do an analysis and give a definitive answer in a way that abstract handwaving can't. The important concept is that the answer to any question is information; and any information is placed on objects where it's meaningful. In this case:

We probably have an object called Family. Families are composed of family members (which can be subtyped into parents and children, if necessary). Each family member has zero or more cars. We want to know if any two family members have the same colour car. A family has an income (the sum of the income of its members). We want to know if the income is greater than X. X is defined in the Law object. A family is subject to exactly one tuple of laws.

I think that's a fair expansion of your requirement statement. I can now draw the OIM; but, this is an email, so I won't. I'll just state the objects and their attributes. (To keep things on one line, I've used rather poor names.)

Law:           *id, X, Y, Z
Family:        *family_name, law_id(R), income(M), shared_car_color(M), X_exceeded(M)
Family_Member: *family_name(R), *given_name, income, car_color(M)
Car:           *id, owner_family_name(R), owner_given_name(R), color
Respray:       *car_id(R), new_color

I could now draw the DFD to show how the attributes are derived. I won't, because I don't think it's necessary to answer your point.

I will admit that I could have gone further. I could have created an attribute on a car that says "needs a respray". However, I used my analyst judgement to decide that there is some control-flow policy needed to determine which car should be resprayed, and to what color. This decision will be reflected in the ADFDs. So we don't have complete independence. However, I am able to make this decision (and some others) before I start constructing the ADFDs; so you can't say that it depends on them.

At this point, my mind started thinking "wouldn't it be nice if I could trigger an event when the X threshold is exceeded" -- I'll leave that for another day! (I'll stick with your Breadwinner_Raise event for now.)

> Not necessarily in the asynchronous view of time. The analyst has to know when data will be consistent. If the architecture updates M=VD when either V or D gets updated, the analyst will have to handle consistency for pairs of updates in the OOA but if it is calculated when M is read, then the analyst only needs to be careful when M is read. The models will be different.

In either case, if the attribute is read at the wrong time, then an incorrect value will be returned. The value returned by the read accessor is the value of the most recent update. If the update is triggered by the read accessor, then this will be very up-to-date, but will depend on the most recently written values of V and D. If, OTOH, the update is triggered on every change to V or D, then the returned value will be a bit older, but will still depend on the most recently written values of V and D. In other words, there is no difference.
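As a small check of that claim, here is a Python sketch (invented names, and only a sketch): an eager variant that re-derives M on every write, and a lazy variant that derives it on read, return the same value for any sequence of writes:

    # Both variants always reflect the most recently written V and D.
    class Eager:
        def __init__(self, v, d):
            self.v, self.d, self.m = v, d, v * d
        def write(self, v=None, d=None):
            if v is not None: self.v = v
            if d is not None: self.d = d
            self.m = self.v * self.d           # derive on write
        def read_m(self):
            return self.m

    class Lazy:
        def __init__(self, v, d):
            self.v, self.d = v, d
        def write(self, v=None, d=None):
            if v is not None: self.v = v
            if d is not None: self.d = d       # no derivation here
        def read_m(self):
            return self.v * self.d             # derive on read

    e, l = Eager(2, 5), Lazy(2, 5)
    for v, d in [(3, 5), (3, 7)]:
        e.write(v=v, d=d)
        l.write(v=v, d=d)
        assert e.read_m() == l.read_m()        # no observable difference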
> I missed something somewhere. How does "waiting for D" work? The DFD isn't a state machine. When a value of D was supplied, how would it know whether it should continue and calculate M or wait for a V? It seems to me that a change-concept would have to be provided, but this is supplied by the ADFD -- so the ball is back in the analyst's court.

It's an abuse of the guard mechanism. I would have no objection to outlawing it as a rule in the method (but without such a rule, there's nothing to prevent it).

It works like this. Guards are used to allow changes to be seen by an updated derived attribute. Until the guard lets it through, the change is not seen. So, suppose we have a dependency M=DV, and we have a change-concept "both-values-arrived" which guards both flows. D and V are updated asynchronously; but we have a state machine (somewhere) which knows when we have a valid pair.

When the first value is received (say, V) we write it to the attribute, but don't tell the guard to let it through (thus M still sees the old value). When the second value is received (D), we use the write accessor with "both-values-arrived", which makes the most recent values of D and V visible to M; M is then updated.

As I said, this is probably an abuse of the guards (which were introduced to solve the mutual dependency problem). If I start using them in this way, it destroys the view of the DFD as a static dependency diagram.

> > I think that the extra complexity goes into the architecture. A simple rule would be: "update attributes, that will be held constant, in the write accessor (before the write); update everything else in the read accessor".

> This will take some more convincing, but it may depend upon the "waiting for D" answer above.

It's only one possible solution. The reasoning goes like this: the value of a read accessor depends on the values of things-held-constant and on the most up-to-date values of everything else. The value of a constant depends on the current values of everything at the time the value is made constant; and things are made constant just before the write access.

> To send a calculated value out of the domain you would have to either not use the DFD approach or create an OIM attribute for the value that was then passed to the bridge. I would regard the latter as a clear case of creating an artificial attribute that had little to do with the domain's abstraction. So I think this is one of the cases where the approach would not apply and it would be handy to have my read accessor extension.

See my reply to Mike Finn. Not seeing the value of the output within the domain can get very frustrating (and lead to the subsequent introduction of the attribute). The reason is feedback. As a simpler justification, if the domain is sending out information, then that information must be meaningful to the domain; and meaningful information belongs on the OIM.

> Bridges are not necessarily involved. The synchronization signal I had in mind was an internal domain event (e.g., an iteration over collecting samples was completed). As a more specific example, we have done this sort of thing to average instrument readings when there is noise. The domain's specification defines the number of samples and taking a sample requires multiple events. In our case we also controlled when the average was used, but one could envision another fuzzy logic domain that periodically looked at the average asynchronously. It would be the domain's responsibility to ensure consistency by only calculating a new value when the sampling completion event arrived (and then sending an event to start collecting the next set of samples).
This example doesn't work, because you can't guarantee when the event will be processed. If you send an event to say "synchronised" then, by the time it is received, the synch may be lost. The only way to avoid this is with a handshake event to allow further sampling. Once you have such a mechanism, synchronisation is completely handled within the analysis, not the formalism. If you have done this, then synchronisation concerns for the DFD within the formalism are irrelevant.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi All,

Does anyone know of any other vendors of Shlaer Mellor OOA tools other than the two obvious ones (Project Technology and Kennedy Carter)?

Best Regards,
Dan :-)

PS I delivered my OOA presentation to my colleagues today and it went down well; I think we may have some converts.

"Leslie Munday" writes to shlaer-mellor-users:
--------------------------------------------------------------------

There are many others, but I don't want to use this list to name vendors (got into trouble for that once before) so I'll e-mail Dan the ones I know of. I would like to extend Dan's question and ask 'are there any Shlaer-Mellor tools that support ADFDs?', apart from the one I already know about.

Leslie. E-mail me directly if referencing Vendors.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> > If this is the case I want to see how that data is updated when I am looking at STDs (i.e., I want visibility for the conditionality of the update at the STD level of abstraction).

> If the discriminating attribute is written in one ADFD, and read in a different one (possibly in a different object) then I don't see the benefit of seeing the derivation in ADFDs over the DFD. You will still need to check more than one diagram. Indeed, using the attribute to control the test is very much in the spirit of the DFD.

I was responding to your assertion that the test should be in a transform. In that case it would not be visible on _any_ diagram. If you want to put tests with control flows in DFDs, that's fine with me.

> > Then you are extending the DFD; a DFD only specifies that data _may_ flow, not _whether_ it will. [I don't have a problem with that, though, and I think you need it to make this approach more applicable.]

> On the DFD, the guards specify the "whether". If a flow isn't guarded, then data does flow and derived attributes are updated (continuously).

Then you have a different book on DFDs than I do. Mine ("Diagramming Techniques for Analysts and Programmers" by James Martin and Carma McClure) shows several variants but none have either guards or conditional flows. The book was written in 1985, so maybe someone has enhanced them since. I was under the impression that ADFDs were developed by evolving DFDs with control/conditional flows.

> It doesn't take a complex example to show that results depend on run-time values: a=b+c depends on the run-time values of b and c. I thought your point was that there could be a difference between the simulation results and the eventual program execution. An example of this is the find-one accessor.
> However, I don't see how derived attributes contribute to this issue significantly.

You need to go to another level of complexity. If you have a choice between setting a=b+c and a=b*c and that choice is determined at run time by examining some other property, say a fuzzy sampling for some probabilistic variable, d, then you can't determine the value of "a" statically via application of a static formula. You can simulate it, though (although only one simulator on the market currently could).

> But then, OOA 96 is very loose on what "f" can be. For (M) attrs, it just says "cite the formula or algorithm". Is the result of "multiply" defined at the domain level, or in the architectural domain? Or somewhere else? How about "mean-average", "rms", etc.? I assume that anything that is valid for a transform process is also valid for a derivation formula.
>
> In OOA96, there was no simulation associated with (M) attributes because it was assumed that the value would be explicitly updated in the ADFDs (or SDFDs) using a transform process and a write accessor. If necessary, I can have a look for the post of Neil Lang where he clarified this point.

Now that is interesting. If so, my argument does not apply. However, I was under the impression that the (M) was introduced so that the translation could handle the update automatically by applying a static formula.

> > And a navigation path between objects isn't relevant to the OIM?

> If you draw the dependency diagram for my proposal, I believe it is:
>
>          ADFD
>           |
>     DFD  STD
>      |    |
>       \   /
>        | |
>        v v
>        OIM
>
> I believe yours is:
>
>     DFD
>      |
>      v
>     ADFD
>      |
>      v
>     STD
>      |
>      v
>     OIM
>
> I.e., if you change the ADFD (e.g. move some behaviour into a different object), you may need to change the DFD to construct new macros. Also, you don't know what macros you need until you've constructed the ADFDs. I am attempting to reduce the depth of the dependency tree by creating a DFD that has no dependencies with the ADFDs.

I believe my diagram is exactly the same as yours for navigations. One can certainly define the navigations in the DFD immediately after the OIM is in place. The ADFD simply refers to that purely static description. Moreover, they would not change as the STDs and ADFDs were developed -- which I contend is not necessarily true of the attributes and DFDs for your write accesses where you may have to add or move attributes as the overall solution algorithm is defined.

> A composed relationship might help you. If you define (on the OIM) R4 = R1+R2+R3 then you can use R4 in the ADFDs. If you later change this to R4 = R1+R2+R5+R3, then navigations across R4 aren't affected.
>
> This seems to be the gist of your amendment.

Not quite. This assumes (a) the only navigation I care about starts from the R1/R4 object and ends at the R3/R4 object and (b) that I have a relationship loop. Or are you proposing that I add artificial composed relationships to the OIM to reduce maintenance of SMALL navigations in actions?

> > Which question is appropriate is more likely to depend upon how I apportion the larger functionality than on specific requirements (e.g., the law says you can't have two cars with the same color in a family with income over X so one will have to be repainted when the Breadwinner Raise event comes in).

> It's nice that you give a specific example. It means I can do an analysis and give a definitive answer in a way that abstract handwaving can't.
> The important concept is that the answer to any question is information; and any information is placed on objects where it's meaningful. In this case:
>
> We probably have an object called Family. Families are composed of family members (which can be subtyped into parents and children, if necessary). Each family member has zero or more cars. We want to know if any two family members have the same colour car. A family has an income (the sum of the income of its members). We want to know if the income is greater than X. X is defined in the Law object. A family is subject to exactly one tuple of laws.

The problem with analyzing examples without the problem space context is that one tends to make up the context to fit the argument. Why do I need a Family? Why do I need a Law object? If they are not necessary to the domain's abstraction at the time you create the OIM, then you would be making them up simply to interpret the requirement in a fashion that allows you to define the DFD before the STDs. In effect I think you would be doing the STD analysis to determine the OIM objects. And that would move you into the responsibility camp where objects are defined based upon functionality rather than data.

> In either case, if the attribute is read at the wrong time, then an incorrect value will be returned. The value returned by the read accessor is the value of the most recent update. If the update is triggered by the read accessor, then this will be very up-to-date, but will depend on the most recently written values of V and D. If, OTOH, the update is triggered on every change to V or D, then the returned value will be a bit older, but will still depend on the most recently written values of V and D. In other words, there is no difference.

I can't agree. The most recent values are not necessarily valid -- it was originally postulated that the inputs are updated in pairs and the derived value must be for a pair. So if one value for the latest pair has not been updated but the other has, the current values will not yield a valid derived value. How the analyst will guard against reading an invalid derived value in the OOA will depend upon when the update is done.

> > I missed something somewhere. How does "waiting for D" work? The DFD isn't a state machine. When a value of D was supplied, how would it know whether it should continue and calculate M or wait for a V? It seems to me that a change-concept would have to be provided, but this is supplied by the ADFD -- so the ball is back in the analyst's court.

> It's an abuse of the guard mechanism. I would have no objection to outlawing it as a rule in the method (but without such a rule, there's nothing to prevent it).
>
> It works like this. Guards are used to allow changes to be seen by an updated derived attribute. Until the guard lets it through, the change is not seen. So, suppose we have a dependency M=DV, and we have a change-concept "both-values-arrived" which guards both flows. D and V are updated asynchronously; but we have a state machine (somewhere) which knows when we have a valid pair.
>
> When the first value is received (say, V) we write it to the attribute, but don't tell the guard to let it through (thus M still sees the old value). When the second value is received (D), we use the write accessor with "both-values-arrived", which makes the most recent values of D and V visible to M; M is then updated.
> As I said, this is probably an abuse of the guards (which were introduced to solve the mutual dependency problem). If I start using them in this way, it destroys the view of the DFD as a static dependency diagram.

OK, I misunderstood. I thought you were going for complete automation of the update process, but you still want to have the decision about when the DFD can proceed reside in the STD/ADFD. (Now that I think about it, your way is the only way that makes sense since the guards have to be triggered from the ADFD.)

I agree with the idea of an abuse to the extent that I think it supports my concern that you won't always know how to write the DFD until you do the STDs. However, I don't see such guarding as a major problem per se. This is mainly because I am only concerned with adding and moving attributes in the OIM -- if you let me do read navigations and use transient data, then I don't have to mess with the OIM and the DFD can be built/updated any time.

Unfortunately this is also a trap. I agree that you want to do as much as possible to define the DFD prior to STDs. I also agree that you can probably do it most of the time if you spend some extra time thinking about things with the New Mindset. I just don't agree this will work all the time. By providing my extensions to deal with the exceptions this would also provide a somewhat mindless means for bypassing that thinking when one shouldn't.

> See my reply to Mike Finn. Not seeing the value of the output within the domain can get very frustrating (and lead to the subsequent introduction of the attribute). The reason is feedback. As a simpler justification, if the domain is sending out information, then that information must be meaningful to the domain; and meaningful information belongs on the OIM.

I think the second argument is the strongest but I can't completely buy it, at least not for client domains. The value being output to the service domain is something that the service domain needs. It is calculated as a transient precisely because it is not of interest in the client domain -- the need for calculation is determined by the nature of the service being invoked.

> > Bridges are not necessarily involved. The synchronization signal I had in mind was an internal domain event (e.g., an iteration over collecting samples was completed). As a more specific example, we have done this sort of thing to average instrument readings when there is noise. The domain's specification defines the number of samples and taking a sample requires multiple events. In our case we also controlled when the average was used, but one could envision another fuzzy logic domain that periodically looked at the average asynchronously. It would be the domain's responsibility to ensure consistency by only calculating a new value when the sampling completion event arrived (and then sending an event to start collecting the next set of samples).

> This example doesn't work, because you can't guarantee when the event will be processed. If you send an event to say "synchronised" then, by the time it is received, the synch may be lost. The only way to avoid this is with a handshake event to allow further sampling. Once you have such a mechanism, synchronisation is completely handled within the analysis, not the formalism. If you have done this, then synchronisation concerns for the DFD within the formalism are irrelevant.

Let me try to clarify the example.
The synchronization event is generated in the domain when the collection of a set of sample measurements is completed. It is effectively the last event in a chain that collects the samples. The events that collect the samples happen to be self-directed, but one could introduce flags and wait states to ensure the sampling is completed prior to sending the last event announcing completion.

The target action of the synchronization event (announcing completion of sampling) then calculates and writes the derived average. It then generates an event to start collecting another set of measurements. Meanwhile, any read access of the derived average value is valid until the collection of the next entire set of measurements is completed.

As it happens I think there may be a way to do this using the DFD approach. We could have used five instances of a Measurement Sample object to hold the sample values, but we chose to use running sum and count attributes in a Measurement object. Either way, one has to know which sample value is being updated. This implies that one knows which is the last one to be collected (the count determines this in our case). If so, one could do that last sample write with the "all-values-arrived" change-concept that would allow the derived average attribute to be updated consistently via the DFD.

So I am closer to conversion, but not quite there yet.

--
H. S. Lahman
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com
"There is nothing wrong with me that could not be cured by a capful of Drano"

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> Responding to Whipp...
>
> I was responding to your assertion that the test should be in a transform. In that case it would not be visible on _any_ diagram. If you want to put tests with control flows in DFDs, that's fine with me.
> [...]
> You need to go to another level of complexity. If you have a choice between setting a=b+c and a=b*c and that choice is determined at run time by examining some other property, say a fuzzy sampling for some probabilistic variable, d, then you can't determine the value of "a" statically via application of a static formula. You can simulate it, though (although only one simulator on the market currently could).

I am not suggesting you should put tests in the DFD. I am suggesting that derivation formulae and algorithms may contain implicit or explicit tests. For example, if an attribute p holds a value (0 or 1) which determines whether I want a=b+c or a=b*c, then I can write

    a = p ? b+c : b*c

Alternatively, I could write

    a = p*(b+c) + (1-p)*(b*c)

The latter version contains no test but, if p = {0,1}, then it is equivalent.

If this is a real derivation, with "meaning", then I see no advantage in using an explicit test process to do the calculation. It has no effect on the flow of control in the domain. (Though, obviously, the calculated value may later be used by a test process.) It is this type of test that I said could be encapsulated within a transform.

If you do use an explicit test process for the calculation, and then later use a test process to test the result (and thus generate an event), then you are actually using 2 test processes to make 1 decision. That doesn't feel right.
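A quick check of that equivalence, in Python purely for concreteness:

    # For p in {0, 1}, the arithmetic form hides the same test that the
    # explicit conditional performs.
    def with_test(p, b, c):
        return b + c if p else b * c

    def without_test(p, b, c):
        return p * (b + c) + (1 - p) * (b * c)

    for p in (0, 1):
        for b in (2, 5):
            for c in (3, 7):
                assert with_test(p, b, c) == without_test(p, b, c)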
> > On the DFD, the guards specify the "whether". If a flow isn't guarded, then data does flow and derived attributes are updated (continuously).

> Then you have a different book on DFDs than I do. Mine ("Diagramming Techniques for Analysts and Programmers" by James Martin and Carma McClure) shows several variants but none have either guards or conditional flows. The book was written in 1985, so maybe someone has enhanced them since. I was under the impression that ADFDs were developed by evolving DFDs with control/conditional flows.

I don't think I've ever seen a DFD with guards. In the first post on the subject of DFDs, I did say that it was a DFD with a few special features. One of them is that I chose to use guards to control the flow of data. I think I wrote a reasonable summary of the required execution semantics at the time.

I can think of quite a few DFD descriptions, including: DeMarco, Yourdon, Ward-Mellor, Hatley-Pirbhai, MOOSE, Shlaer-Mellor. The list is not exhaustive. All of them have slightly different semantics. I think that ADFDs were developed from the Ward-Mellor variety.

> > A composed relationship might help you. If you define (on the OIM) R4 = R1+R2+R3 then you can use R4 in the ADFDs. If you later change this to R4 = R1+R2+R5+R3, then navigations across R4 aren't affected.
> >
> > This seems to be the gist of your amendment.

> Not quite. This assumes (a) the only navigation I care about starts from the R1/R4 object and ends at the R3/R4 object and (b) that I have a relationship loop. Or are you proposing that I add artificial composed relationships to the OIM to reduce maintenance of SMALL navigations in actions?

Yes, I was suggesting you should add artificial loops; and yes, I would agree that they would probably be artificial. But they would still be correct (i.e. the problem domain would have that relationship). It seems no worse than adding your artificial flows into the DFD to reduce the SMALL navigations. Actually, I don't recommend either of these approaches. If you can't find a suitable abstraction then leave the complexity where it's both visible and annoying.

> The problem with analyzing examples without the problem space context is that one tends to make up the context to fit the argument. Why do I need a Family? Why do I need a Law object? If they are not necessary to the domain's abstraction at the time you create the OIM, then you would be making them up simply to interpret the requirement in a fashion that allows you to define the DFD before the STDs. In effect I think you would be doing the STD analysis to determine the OIM objects. And that would move you into the responsibility camp where objects are defined based upon functionality rather than data.

I agree that one creates models to fit the mindset. However, in this case, I think I must reject your criticisms. All my objects came from nouns in your requirement; so did the descriptive attributes (I had to invent some identifiers). If you want to clarify your requirement then I'll happily revise my model (that's what analysis is all about :-)). However, I doubt that you would be able to mangle your requirement to an extent that prevents me from deploying my derived attributes prior to constructing STDs.

(Your specific points: the family object is a central noun in your description. I created a Law object because I wanted somewhere to put the "X" value without hard coding it into a formula. Your requirement suggested that it needed a scope greater than the family; and you used the noun "Law".)
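For what it's worth, the derivations in that model can be written down with no reference to any STD. A Python sketch, with invented names and only the attributes needed here:

    # Sketch of the derived attributes on Family: income(M),
    # shared_car_color(M) and X_exceeded(M), all defined over OIM data.
    class FamilyMember:
        def __init__(self, income, car_colors):
            self.income = income
            self.car_colors = car_colors       # colours of this member's cars

    class Family:
        def __init__(self, members, law_x):
            self.members = members
            self.law_x = law_x                 # X from the governing Law tuple

        def income(self):                      # income(M)
            return sum(m.income for m in self.members)

        def shared_car_color(self):            # shared_car_color(M): do any
            seen = set()                       # two members share a colour?
            for m in self.members:
                for colour in set(m.car_colors):
                    if colour in seen:
                        return True
                    seen.add(colour)
            return False

        def x_exceeded(self):                  # X_exceeded(M)
            return self.income() > self.law_x

    f = Family([FamilyMember(40000, ["red"]),
                FamilyMember(35000, ["red", "blue"])], law_x=70000)
    assert f.shared_car_color() and f.x_exceeded()

Each derivation reads only OIM data; nothing in it anticipates which state action will eventually ask the question.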
> > In either case, if the attribute is read at the wrong time, then an incorrect value will be returned. The value returned by the read accessor is the value of the most recent update. If the update is triggered by the read accessor, then this will be very up-to-date, but will depend on the most recently written values of V and D. If, OTOH, the update is triggered on every change to V or D, then the returned value will be a bit older, but will still depend on the most recently written values of V and D. In other words, there is no difference.

> I can't agree. The most recent values are not necessarily valid -- it was originally postulated that the inputs are updated in pairs and the derived value must be for a pair. So if one value for the latest pair has not been updated but the other has, the current values will not yield a valid derived value. How the analyst will guard against reading an invalid derived value in the OOA will depend upon when the update is done.

That is incorrect. Independent of when the update actually occurs, the value returned by the read accessor is the same. In all cases, the analyst must ensure that the value is valid. The most recent values may be invalid; but those are the values used. The analyst must make sure that an invalid value is not used.

> [using guards to ensure only valid values are used]
>
> OK, I misunderstood. I thought you were going for complete automation of the update process, but you still want to have the decision about when the DFD can proceed reside in the STD/ADFD. (Now that I think about it, your way is the only way that makes sense since the guards have to be triggered from the ADFD.)

Note that I am only controlling what values are updated. I am not controlling when the update is done. Whether you do the update in the read accessor, the write accessor, or somewhere between, the value returned by a read accessor is up-to-date.

> However, I don't see such guarding as a major problem per se. This is mainly because I am only concerned with adding and moving attributes in the OIM -- if you let me do read navigations and use transient data, then I don't have to mess with the OIM and the DFD can be built/updated any time.

I'll let you use transient data (it's actually necessary). If you can convince me that your shortcut navigations are meaningful (and predictable) independently of the ADFDs, then I may let you have them, too. I will not let you use a change-concept to control the navigation of a read accessor (define a navigation-concept if you think it is necessary).

> Unfortunately this is also a trap. I agree that you want to do as much as possible to define the DFD prior to STDs. I also agree that you can probably do it most of the time if you spend some extra time thinking about things with the New Mindset. I just don't agree this will work all the time. By providing my extensions to deal with the exceptions this would also provide a somewhat mindless means for bypassing that thinking when one shouldn't.

The problem is: if a derivation is not meaningful independently of the ADFDs, then it probably shouldn't be there. So providing mechanisms to allow these extra navigations is encouraging bad models.
> > ...As a simpler > > justification, if the domain is sending out information, then that > > information must be meaningful to the domain; and meaningful > > information belongs on the OIM. > > I think the second argument is the strongest but I can't completely buy it, at > least not for client domains. The value being output to the service domain is > something that the service domain needs. It is calculated as a transient > precisely because it is not of interest in the client domain -- the need for > calculation is determined by the nature of the service being invoked. I'm sorry to keep saying this, but can you provide a specific example? I can't think of any case where I've sent data out of a domain that was not meaningful to that domain. > Let me try to clarify the example. The synchronization event is generated in > the domain when the collection of a set of sample measurements is completed. It > is effectively the last event in a chain that collects the samples. The events > that collect the samples happen to be self-directed, but one could introduce > flags and wait states to ensure the sampling is completed prior to sending the > last event announcing completion. > > The target action of the synchronization event (announcing completion of > sampling) then calculates and writes the derived average. It then generates an > event to start collecting another set of measurements. Meanwhile, any read > access of the derived average value is valid until the collection of the next > entire set of measurements is completed. At the time that the action reads the average, it is valid. It later sends an event to say it is no longer needed. As soon as that event is dispatched, anyone reading the attribute should not expect to get a valid value. This is, I think, the purpose of putting explicit synchronisation into the OOA. If a value is needed beyond the time-scope of the synchronisation, then we may have to conclude that you need a different attribute (i.e. the mathematical derivation is not true). > As it happens I think there may be a way to do this using the DFD approach. We > could have used five instances of a Measurement Sample object to hold the sample > values, but we chose to use a running sum and count attributes in a Measurement > object. Either way, one has to know which sample value is being updated. This > implies that one knows which is the last one to be collected (the count > determines this in our case). If so, one could do that last sample write with > the "all-values-arrived" change-concept that would allow the derived average > attribute to be updated consistently via the DFD. This would work. But perhaps the general case (where the synch isn't known when the attribute is written) needs a more powerful mechanism. Of course, we could use the "all-values-arrived" change-concept with a write accessor that sets count to its current value; but that sort of idiom demands a proper mechanism: the ability to activate a guard without writing an attribute. The only question is whether we would be able to determine that a guard is needed without writing the STDs. I think I could make a case that this is possible; but I won't even attempt it until I can convince you that we can determine the attributes prior to STDs :-).
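To make the count-guarded idiom concrete, a rough C++ sketch (names invented; the running sum and count mirror the Measurement object described above, but this is only one of many possible renderings):

    class Measurement {
        double sum = 0;
        int    count = 0;
        const int set_size;       // samples per complete set
        double average = 0;       // derived; always reflects the
    public:                       // last complete set of samples
        explicit Measurement(int n) : set_size(n) {}
        void write_sample(double s) {
            sum += s;
            if (++count == set_size) {      // "all-values-arrived"
                average = sum / set_size;   // guarded update fires here
                sum = 0;
                count = 0;
            }
        }
        // Asynchronous reads never see a partially-updated average.
        double read_average() const { return average; }
    };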
> So I am closer to conversion, but not quite there yet. :-) I'll keep working on it. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > a = p ? b+c : b*c > If this is a real derivation, with "meaning", then I see no > advantage in using an explicit test process to do the calculation. > It has no effect on the flow of control in the domain. (Though, > obviously, the calculated value may later be used by a test > process). It is this type of test that I said could be > encapsulated within a transform. It all depends upon whether "a" is tested later to generate (or not) an event. If it is, then I certainly want to understand at the ADFD/DFD level how "p" affects that decision. In fact, I would go further and argue that unless the individual operations (b+c vs. b*c) were clearly described in the transform naming convention, then I would want to see them explicitly rather than just their transform. For me this is a deeper issue than your using DFDs. My position is that if one is going to claim the ability to simulate models, then everything needed for that simulation must be available formally and unambiguously in the models. At the time the expression is evaluated "b" and "c" represent the state of the system. If "a" will later determine whether an event is generated, then the value of "a" must reflect the state of the system (as opposed to some guess artificially passed to the simulator for path checking). The simulator can only determine this if the nature of the calculation is known. It can't do so if the calculation is in a transform. S-M claims simulation but not necessarily automated simulation. It is fair during manual simulation to look at the informal transform description and substitute a reasonable value based upon the same sort of implicit knowledge, pattern recognition, etc. that the Architect puts into the translation. I have always regarded this as a cop-out because manual simulation is not of much practical interest. [Possibly because of the inherent bias I have from having spent the better part of a decade developing large simulation models.] Also, I see debugging as very similar to simulation, so I want to do the flow of control analysis at the same level of abstraction -- the models -- without having to delve into transform implementations. As soon as I start doing that I may as well be debugging procedural code. But I digress... > > Or are you proposing that I add artificial composed relationships to the > > OIM to reduce maintenance of SMALL navigations in actions? > > Yes, I was suggesting you should add artificial loops; and yes, I > would agree that they would probably be artificial. But they would > still be correct (i.e. the problem domain would have that relationship). > > It seems no worse than adding your artificial flows into the DFD > to reduce the SMALL navigations. The flows I am adding to the DFD are not artificial -- they are already in the OIM. All I am doing is re-describing them in a format that is seamless with the ADFD. In adding them to the DFD I get the benefits of one fact/one place without having to modify the OIM. > I agree that one creates models to fit the mindset. However, in this > case, > I think I mostly reject your criticisms. All my objects came from > nouns > in your requirement; so did the descriptive attributes (I had to invent > some identifiers).
If you want to clarify your requirement then I'll > happily > revise my model (that's what analysis is all about :-)). However, I > doubt > that you would be able to mangle your requirement to an extent that > prevents > me from deploying my derived attributes prior to constructing STDs. > > (Your specific points: the family object is a central noun in your > description. I created a Law object because I wanted somewhere to put > the "X" value without hard-coding it into a formula. Your requirement > suggested that it needed a scope greater than the family; and you used > the noun "Law".) Sure, doing nouns is a useful brainstorming technique, but only a small percentage of them appear as objects in the final OIM! Certainly the Law object is a major stretch. The crucial element is a description of behavior -- in particular how certain objects interact. What's the data associated with Law, its date of enactment? I would be real surprised to have Law be a valid object in a context involving Parent, Teenager, and Car. There probably is one somewhere, but I'll bet there are a whole lot that get along without it. I also argue that Family has essentially the same problem. In both cases I could have stated the requirements just as clearly without mentioning either "law" or "family". To me, the acid test is the data. The only attribute associated with those objects in the given context is the one you need to put someplace when the OIM is written but _before_ knowing how functionality is apportioned. I don't see that as a valid basis for defining objects. If the DFD were developed together with the STDs there would be no need for the objects. Regarding M=VD when (V,D) must be updated in pairs: > That is incorrect. Independent of when the update actually occurs, the > value returned by the read accessor is the same. In all cases, the > analyst > must ensure that the value is valid. The most recent values may be > invalid; but those are the values used. The analyst must make sure that > an invalid value is not used. We are miscommunicating here. To me, your last two sentences are contradictory. The attribute should _never_ contain an invalid value. This is clearly true when it is accessed asynchronously but I think that is generally true for the sake of robustness. This means that the analyst must control when it is updated. > I'll let you use transient data (it's actually necessary). If you > can convince me that your shortcut navigations are meaningful > (and predictable) independently of the ADFDs, then I may let you > have them, too. I will not let you use a change-concept to > control the navigation of a read accessor (define a > navigation-concept if you think it is necessary). They are meaningful and predictable because they simply reflect existing OIM relationships. The only choice lies in selecting the appropriate path (in a few cases). I think the navigation-concept is necessary because OIM relationship loops do not necessarily (though they usually do) require that you arrive at the same instances by different paths. I can see a case, though, for defining separate concepts because they would be used in different modeling contexts; one is selection and the other is enabling. > > I think the second argument is the strongest but I can't completely buy it, at > > least not for client domains. The value being output to the service domain is > > something that the service domain needs.
It is calculated as a transient > > precisely because it is not of interest in the client domain -- the need for > > calculation is determined by the nature of the service being invoked. > > I'm sorry to keep saying this, but can you provide a specific example? > I can't think of any case where I've sent data out of a domain that > was not meaningful to that domain. First, I think "important" or "relevant" would be a better choice of words than "meaningful". Attributes might be meaningful without being relevant or important to the abstraction. Let's say you have a client that needs to play with our favorite M=VD but it needs to do this at various temperatures and pressures, holding M constant. To do this, it invokes a service domain that knows all about PV=nRT to calculate V. The client is going to have to pass that service domain a value of n that is computed from M and the gas type. [In such a simple case it would be tempting to place the conversion in the bridge, but I don't like smart bridges.] I argue that the client domain probably does not _care_ about the number of moles of gas despite the fact that moles is meaningful and the domain may know how to calculate it. At its level of abstraction I assert that M is all the domain needs to know about the mass. In this case I would see no need to create a derived attribute for n in the domain just to pass it out on a bridge because it is not important or relevant to the domain abstraction. [As soon as you use n in the domain itself, of course, its inclusion becomes justified.] > > Let me try to clarify the example. The synchronization event is generated in > > the domain when the collection of a set of sample measurements is completed. It > > is effectively the last event in a chain that collects the samples. The events > > that collect the samples happen to be self-directed, but one could introduce > > flags and wait states to ensure the sampling is completed prior to sending the > > last event announcing completion. > > > > The target action of the synchronization event (announcing completion of > > sampling) then calculates and writes the derived average. It then generates an > > event to start collecting another set of measurements. Meanwhile, any read > > access of the derived average value is valid until the collection of the next > > entire set of measurements is completed. > > At the time that the action reads the average, it is valid. It later > sends an event to say it is no longer needed. As soon as that event > is dispatched, anyone reading the attribute should not expect to get > a valid value. Recall that I stipulated that the access of the value could be asynchronous. There _must always_ be a consistent (relative to the included set of measurements) value available for that access. Therefore the synchronization event determines when the update can take place to ensure always having a consistent value, not whether one can access it or not. > This is, I think, the purpose of putting explicit > synchronisation into the OOA. If a value is needed beyond the > time-scope of the synchronisation, then we may have to conclude that > you need a different attribute (i.e. the mathematical derivation > is not true). Remember this got started because I asserted that some updates needed to be determined (controlled) by the analyst rather than simply letting the architecture define a rule for updating. The mathematical derivation is still true, given that the time is valid for computing it.
So I see no problem with solving this particular case correctly using your approach. The analyst simply has to have control over when the update is done (i.e., updating the last measurement with "all-values-arrived"). > The only question is whether we would be able to determine that a guard > is needed without writing the STDs. I think I could make a case that > this > is possible; but I won't even attempt it until I can convince you that > we can determine the attributes prior to STDs :-). As the Family Lawyer example above demonstrates, I'm still not buying that. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > For me this is a deeper issue than your using DFDs. My position is > that if one is going to claim the ability to simulate models, then > everything needed for that simulation must be available formally and > unambiguously in the models. At the time the expression is evaluated > "b" and "c" represent the state of the system. If "a" will later > determine whether an event is generated, then the value of "a" must > reflect the state of the system (as opposed to some guess > artificially passed to the simulator for path checking). The > simulator can only determine this if the nature of the calculation is > known. It can't do so if the calculation is in a transform.
Same thing - the names may be wrong, but the structure is in the requirements (though it may be obscured by other details). To disprove this statement, you only need 1 counter-example. But this is getting a bit off topic. > To me, the acid test is the data. The only attribute associated with > those objects in the given context is the one you need to put > someplace when the OIM is written but _before_ knowing how > functionality is apportioned. I don't see that as a valid basis for > defining objects. If the DFD were developed together with the STDs > there would be no need for the objects. I'm sorry, I have absolutely no idea what you're trying to say. Object responsibility is apportioned as part of the construction of an OIM (it goes in the object description) - you don't need an STD before you work out what you want an object to do. I sure don't see how codevelopment of STD+DFD would allow you to get rid of objects - that would suggest a rather poor initial analysis (though mistakes will always be identified throughout the development process). > Regarding M=VD when (V,D) must be updated in pairs: > > Independent of when the update actually occurs, > > the value returned by the read accessor is the same. In all cases, the > > analyst must ensure that the value is valid. The most recent values > > may be invalid; but those are the values used. The analyst must make > > sure that an invalid value is not used. > We are miscommunicating here. To me, your last two sentences are > contradictory. The attribute should _never_ contain an invalid > value. This is clearly true when it is accessed asynchronously but I > think that is generally true for the sake of robustness. This means > that the analyst must control when it is updated. If a = b + c, then if either b or c is invalid, then a will also be invalid. In your case, you have that V and D may be an inconsistent pair; and M=VD. So if V and D are inconsistent, then M will be wrong. However, the mathematical relationship will still be valid (M does equal V * D). I am willing to consider the possibility of using guards to prevent the invalid value; but without them, the analyst must find other ways of preventing anyone from using an invalid value of M. > > If you can convince me that your shortcut navigations are meaningful > > (and predictable) independently of the ADFDs, then I may let you have > > them, too. > They are meaningful and predictable because they simply reflect > existing OIM relationships. The only choice lies in selecting the > appropriate path (in a few cases). The problem is, if you have 10 relationships, there may be 30 meaningful chained navigations. However, ADFDs may only use 3 or 4 of these. How do you predict, prior to doing STDs and ADFDs, which chained navigations you actually want? > > I'm sorry to keep saying this, but can you provide a specific example? > > I can't think of any case where I've sent data out of a domain that > > was not meaningful to that domain. > First, I think "important" or "relevant" would be a better choice of > words than "meaningful". Attributes might be meaningful without > being relevant or important to the abstraction. Possibly. I think "relevant" is better than "important". > Let's say you have a client that needs to play with our favorite M=VD > but it needs to do this at various temperatures and pressures, > holding M constant. To do this, it invokes a service domain that > knows all about PV=nRT to calculate V.
The client is going to have > to pass that service domain a value of n that is computed from M and > the gas type. [In such a simple case it would be tempting to place > the conversion in the bridge, but I don't like smart bridges.] If we accept your assumption that PV=nRT is in a different domain, then I have to ask, why does my M=VD domain need to know about 'n'? You have assumed that it already knows the "gas type", so it can just share that (and the mass). I am quite happy to put such conversions within a bridge. As long as the algorithm isn't stateful, I see no problems. (I would allow stateful protocols in a bridge, but that's another story). > [As soon as you use n in the domain itself, of course, its inclusion > becomes justified.] And then I'd change my wormholes. I would say that, if you define a wormhole that passes out 'n', then that is a use; and so the inclusion of 'n' in the OIM is justified. In general though, an attribute that is only used by output-wormholes probably indicates some pollution. > Remember this got started because I asserted that some updates > needed to be determined (controlled) by the analyst rather than > simply letting the architecture define a rule for updating. The > mathematical derivation is still true, given that the time is valid > for computing it. So I see no problem with solving this particular > case correctly using your approach. The analyst simply has to have > control over when the update is done (i.e., updating the last > measurement with "all-values-arrived"). The thing I am desperately trying to avoid is the introduction of additional state information into the model through the DFD. Neither the values of transient nodes, nor guards, should need to be known to reconstruct the system's state. There are a few rules that need careful definition; and even then, some state may find its way in. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I would disagree: transforms are simulatable. Sure, the method > doesn't provide a way to specify what they do; but you can always > provide models of transforms if your case tool doesn't provide a > language for specifying their behaviour. True, but the claim is that the OOA _model_ is simulatable. If you have to add action language code or other descriptions of the transforms that are outside the OOA, then that claim is not true. > > The flows I am adding to the DFD are not artificial -- they are > > already in the OIM. All I am doing is re-describing them in a format > > that is seamless with the ADFD. In adding them to the DFD I get the > > benefits of one fact/one place without having to modify the OIM. > > Yes, but there are many more possible flows than actual: see below. > Also, if they are already in the OIM, then why redescribe them later > (This seems to violate the 1-fact rule). Apart from pushing some > of the complexity of ADFDs under the carpet, what benefit do you > actually gain? What is the 1-fact that you're bringing into one > place? (and is it really one fact?) One point of doing them in the DFD is that one defines only those paths that are needed. What one is describing is the navigation mechanism (e.g., the data store accesses implicit in the navigation).
Put another way, the OIM describes What the referential mechanisms are and the ADFD/DFD describes How they are used. The one fact is that a given navigation is defined once in the DFD rather than many times in the ADFD (for each action where the navigation is traversed) -- one of your original justifications for the DFD. Regarding Family & Lawyer: I don't see that structure as applicable. I am _postulating_ that when I did the OIM the only objects I would need would be Parent, Teenager, and Car based upon the problem space. You are introducing OIM objects so that you can get around the problem of defining the location of an attribute before the STDs. Your steps are: (1) define the OIM based on problem space. (2) define the DFD. (3) whoops, there is a problem in the DFD; fix the OIM by rationalizing a new object that wasn't necessary before. > But this is getting a bit off topic. True. I think we have to agree to disagree. You see this as part of the New Mindset, but I see it as a breach of the idea that one should know when one is through with the OIM. > > To me, the acid test is the data. The only attribute associated with > > those objects in the given context is the one you need to put > > someplace when the OIM is written but _before_ knowing how > > functionality is apportioned. I don't see that as a valid basis for > > defining objects. If the DFD were developed together with the STDs > > there would be no need for the objects. > > I'm sorry, I have absolutely no idea what you're trying to say. Object > responsibility is apportioned as part of the construction of an OIM > (it goes in the object description) - you don't need an STD before > you work out what you want an object to do. I sure don't see how > codevelopment of STD+DFD would allow you to get rid of objects - that > would suggest a rather poor initial analysis (though mistakes will > always be identified throughout the development process). The main point is that objects in the OIM (other than associative objects required by the relational model) should have data attributes that are meaningful in the problem space. If the only data attribute is a derived attribute needed to make the DFD before STDs, then I don't think the object is justified. However, if you do the DFD before the STDs, this may be necessary. OTOH, doing the DFD with the STD does not affect the number of objects in the original OIM -- it simply determines the placement of the derived attribute among the already existing objects. > > Regarding M=VD when (V,D) must be updated in pairs: > If a = b + c, then if either b or c is invalid, then a will also be > invalid. In your case, you have that V and D may be an inconsistent pair; > and M=VD. So if V and D are inconsistent, then M will be wrong. > However, the mathematical relationship will still be valid (M does > equal V * D). But the analyst cannot prevent M being used when it has been calculated incorrectly if the access is asynchronous. The analyst must control when a value is calculated to ensure that it is valid. > > They are meaningful and predictable because they simply reflect > > existing OIM relationships. The only choice lies in selecting the > > appropriate path (in a few cases). > > The problem is, if you have 10 relationships, there may be 30 > meaningful chained navigations. However, ADFDs may only use 3 or > 4 of these. How do you predict, prior to doing STDs and ADFDs, > which chained navigations you actually want? By George, I think he's got it!
Whether you are deciding where derived attributes live (if you don't allow my navigations) or what navigations you need can't be determined until you figure out at least where the functionality goes in the STDs. I accidentally snipped the para where you assert that you know where the functionality lives when you do the OIM. We deliberately try to avoid this as much as possible. When we do OIMs we use some very informal use cases to verify that we _will be able_ to delegate the functionality in a reasonable fashion when we do the STDs. But this is very high level stuff (e.g., a preliminary OCM where we merely rationalize paths for the domain's functionality) to be sure we have all the objects and we would not be concerned with the details of particular attribute updates. > > Let's say you have a client that needs to play with our favorite M=VD > > but it needs to do this at various temperatures and pressures, > > holding M constant. To do this, it invokes a service domain that > > knows all about PV=nRT to calculate V. The client is going to have > > to pass that service domain a value of n that is computed from M and > > the gas type. [In such a simple case it would be tempting to place > > the conversion in the bridge, but I don't like smart bridges.] > > If we accept your assumption that PV=nRT is in a different domain, > then I have to ask, why does my M=VD domain need to know about 'n'? It doesn't need to know about it vis-a-vis its own processing, which is my point. However, it does need to know what sort of service it is invoking based upon the definition of the requirements flow in the DC. So "n" lives only as a transient calculation. > You have assumed that it already knows the "gas type", so it can > just share that (and the mass). I am quite happy to put such > conversions within a bridge. As long as the algorithm isn't stateful, > I see no problems. (I would allow stateful protocols in a bridge, > but that's another story). I am strongly biased against smart bridges. Our defect rates are integer factors higher in smart bridges than in domains themselves. I agree that logically there is a strong case for placing it there, but in practice it is not a good idea. > In general though, an attribute that > is only used by output-wormholes probably indicates some pollution. Not necessarily. Most hardware drivers do very little except temporarily store data in attributes until it can be written out. > > Remember this got started because I asserted that some updates > > needed to be determined (controlled) by the analyst rather than > > simply letting the architecture define a rule for updating. The > > mathematical derivation is still true, given that the time is valid > > for computing it. So I see no problem with solving this particular > > case correctly using your approach. The analyst simply has to have > > control over when the update is done (i.e., updating the last > > measurement with "all-values-arrived"). > > The thing I am desperately trying to avoid is the introduction of > additional state information into the model through the DFD. Neither > the values of transient nodes, nor guards, should need to be known > to reconstruct the system's state. There are a few rules that need > careful definition; and even then, some state may find its way in. I don't see the problem. The alternative is to increment a count attribute with each input update and generate an event to the action that writes the derived attribute when the count is maxed.
Either way it seems to me that the analyst is controlling the flow from the STD by either generating an event or invoking the "all-values-arrived" change-concept out of a given action. The DFD does not seem to be relevant to the state of the system. Put another way, with the DFD approach the state of the system is defined literally by the state that the relevant instance is in. If it transitions to another state where the change-concept is invoked one knows whether the value of the attribute is valid or not (and what it is) when the action completes. In my view this is exactly the same as the alternative of transitioning to a state where a count is tested to determine if an event should be issued to cause the derived attribute to be updated. The system state is implicit in the count value and the instance state. Your way is just simpler because you don't need the count; all you need to know is where you are. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > [use "simulation models" for non-simulatable bits of SM] > True, but the claim is that the OOA _model_ is simulatable. If you have to > add action language code or other descriptions of the transforms that are > outside the OOA, then that claim is not true. I am quite happy to claim that a SPICE netlist is simulatable, even though I know that I need models of the components. Ditto for Verilog (or VHDL) gate level netlist models. I suppose you could argue that the difference is that SPICE/gate models are provided by an independent modelling group; and that models are not created for a specific netlist. However, (a) that's just a process issue; and (b) it's not actually true in all cases. > Regarding Family & Lawyer: > True. I think we have to agree to disagree. Yes, we do disagree. > > > Regarding M=VD when (V,D) must be updated in pairs: > > > > > If a = b + c, then if either b or c is invalid, then a will also be > > invalid. In your case, you have that V and D may be an inconsistent pair; > > and M=VD. So if V and D are inconsistent, then M will be wrong. > > However, the mathematical relationship will still be valid (M does > > equal V * D). > > But the analyst cannot prevent M being used when it has been calculated > incorrectly if the access is asynchronous. The analyst must control when a > value is calculated to ensure that it is valid. Actually, if the access comes via an SDFD, then you can. However, I'll admit that for an attribute query interface, an SDFD is not always appropriate. > > > They are meaningful and predictable because they simply reflect > > > existing OIM relationships. The only choice lies in selecting the > > > appropriate path (in a few cases). > > > > The problem is, if you have 10 relationships, there may be 30 > > meaningful chained navigations. However, ADFDs may only use 3 or > > 4 of these. How do you predict, prior to doing STDs and ADFDs, > > which chained navigations you actually want? > > By George, I think he's got it! Whether you are deciding where derived > attributes live (if you don't allow my navigations) or what navigations you > need can't be determined until you figure out at least where the functionality > goes in the STDs.
We've just agreed to disagree on this one :-) However, I think that the problem is harder for relationship navigations than for derived attributes. It's difficult to assign meaning to a chained navigation other than "it's the combination of these two relationships". With a derived attribute such as "money in account", it has meaning even if you don't know it's the difference between the credits and debits on the account (indeed in some cases, you don't have the appropriate history - you just have a pile of cash in a jar - in which case the money in the "account" is the sum of the values of the coins). However, I'm sure we can find examples of meaningful chains. > I accidentally snipped the para where you assert that you know where the > functionality lives when you do the OIM. We deliberately try to avoid this as > much as possible. When we do OIMs we use some very informal use cases to > verify that we _will be able_ to delegate the functionality in a reasonable > fashion when we do the STDs. But this is very high level stuff (e.g., a > preliminary OCM where we merely rationalize paths for the domain's > functionality) to be sure we have all the objects and we would not be > concerned with the details of particular attribute updates. I think that this is probably the cause of the thing we've agreed to disagree about. I assign responsibilities to objects as part of the construction of the OIM. I can use this to work out what derived attributes are needed, before I start building any other diagrams. If I don't know what an object does, then I can't add non-normalised attributes. > > You have assumed that it already knows the "gas type", so it can > > just share that (and the mass). I am quite happy to put such > > conversions within a bridge. As long as the algorithm isn't stateful, > > I see no problems. (I would allow stateful protocols in a bridge, > > but that's another story). > > I am strongly biased against smart bridges. Our defect rates are integer > factors higher in smart bridges than in domains themselves. I agree that > logically there is a strong case for placing it there, but in practice it > is not a good idea. You've got to be careful with this type of statistic. Domains are bigger than bridges, so can mask localised defect hot-spots. If we assume that: a1. for the calculation of the values needed by an input wormhole, the defect rate is higher than for other calculations because it is defined outside the scope of the Object model; and a2. domains are bigger than bridges Then you would expect to see a higher defect rate in smart bridges than in domains. The defect rate is lower in domains because the defect rate statistic is averaged over all elements in the domain or bridge. Domains contain a lot of stuff which (a1) assumes has a lower defect rate: so the average is lower. So your statistic does not provide evidence in favour of your hypothesis: h1. the defect rate for the calculation depends on whether it is in the bridge or the domain In fact, let me make a different hypothesis: h2. The defect density for the whole system will be lower if that system places these calculations in smart-bridges. However, I want to add one more assumption first; it is currently invalid but it is necessary for my hypothesis: a3. The formalism for bridges is as rigorous as for domains. I think any follow-ups here would require a new thread.
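To put some entirely invented numbers on a1 and a2: suppose the wormhole-feeding calculations run at 10 defects/KLOC wherever they live, and everything else at 2 defects/KLOC. Then:

    rate(bridge) = 10 defects/KLOC
                   (1 KLOC, all of it wormhole-feeding calculation)
    rate(domain) = (10*1 + 2*9) / 10 = 2.8 defects/KLOC
                   (10 KLOC: 1 KLOC of such calculation plus 9 KLOC
                    of other code at 2 defects/KLOC)

The same calculations with the same defects produce very different per-module rates, which is why the comparison says nothing, by itself, about where the calculations are best placed.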
> > The thing I am desperately trying to avoid is the introduction of > > additional state information into the model through the DFD. Neither > > the values of transient nodes, nor guards, should need to be known > > to reconstruct the system's state. There are a few rules that need > > careful definition; and even then, some state may find its way in. > > I don't see the problem. The alternative is to increment a count attribute > with each input update and generate an event to the action that writes the > derived attribute when the count is maxed. The point is that I would like my system state to be defined in the OIM: the value of each attribute + the state of each state machine. However, if you think that this is being unnecessarily purist, then it'll simplify a few things. > Either way it seems to me that the > analyst is controlling the flow from the STD by either generating an event or > invoking the "all-values-arrived" change-concept out of a given action. The > DFD does not seem to be relevant to the state of the system. I was thinking that the value of derived attributes shouldn't be part of the state, i.e. they can always be recalculated from a system state which excludes them. If you allow that both derived attributes and transient nodes may hold their values when their inputs are guarded, then the problems go away. It's just that that doesn't feel aesthetically right. > Put another way, with the DFD approach the state of the system is defined > literally by the state that the relevant instance is in. If it transitions to > another state where the change-concept is invoked one knows whether the value > of the attribute is valid or not (and what it is) when the action completes. > In my view this is exactly the same as the alternative of transitioning to a > state where a count is tested to determine if an event should be issued to > cause the derived attribute to be updated. The system state is implicit in > the count value and the instance state. Your way is just simpler because you > don't need the count; all you need to know is where you are. I think I'll concede this one. I was being overly pedantic. But I may still go for a rule that says that you can't guard the input to a transient node; only to attributes (i.e. derived attributes can hold state; but transient nodes can't). Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. 'archive.9811' -- Subject: [Fwd: (SMU) Entangled Attributes] lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > True, but the claim is that the OOA _model_ is simulatable. If you have to > > add action language code or other descriptions of the transforms that are > > outside the OOA, then that claim is not true. > > I am quite happy to claim that a SPICE netlist is simulatable, even > though I know that I need models of the components. Ditto for Verilog > (or VHDL) gate level netlist models. I should have been more precise. I was referring to the situation where the transform modified attribute data that was later tested to determine whether an event was generated. I have no problem with the transforms being ignored in the simulation otherwise. > I think that this is probably the cause of the thing we've agreed to > disagree about.
I assign responsibilities to objects as part of the > construction of the OIM. I can use this to work out what derived > attributes are needed, before I start building any other diagrams. > If I don't know what an object does, then I can't add non-normalised > attributes. And my argument is that the OIM is a static description (what it is) so one should not be concerned with dynamic issues (what it does) in that part of the model. > > I am strongly biased against smart bridges. Our defect rates are integer > > factors higher in smart bridges than in domains themselves. I agree that > > logically there is a strong case for placing it there, but in practice it > > is not a good idea. > > You've got to be careful with this type of statistic. Domains are > bigger than bridges, so can mask localised defect hot-spots. > > If we assume that: > > a1. for the calculation of the values needed by an input wormhole, > the defect rate is higher than for other calculations because it > is defined outside the scope of the Object model; and > > a2. domains are bigger than bridges > > Then you would expect to see a higher defect rate in smart bridges > than in domains. The defect rate is lower in domains because the defect > rate statistic is averaged over all elements in the domain or bridge. > Domains contain a lot of stuff which (a1) assumes has a lower defect > rate: so the average is lower. I don't agree. The metric is a defect _rate_ normalized to code size. [We don't count reused architectural code in the domains.] In our earliest pilot project 1/3 of the code was in bridges and another 1/3 was in transforms (transforms and bridges have about the same defect rates). Over time we have made the bridges much simpler and moved that code into the domains. By your argument I would have expected an increase in the domain defect rates approaching a third and a decrease in the bridge defect rates because they are far simpler. In fact, all the rates have remained essentially the same. I attribute this to a couple of things. First, the models are easier to grok. Second, the models have a rigorous formalism that guides them while the bridges and transforms are essentially just your basic procedural code. I think this is actually quite important because it provides the developer with a context that tends to support better self-checking before things get to review or test (i.e., before the errors are counted). Third, the CASE tool performs detailed checks on the OOA elements as they are entered but it merely checks the transforms and bridges for syntax (we use an action language) prior to generating code. [We don't record these errors as defects because they are corrected by the developer prior to any peer review or testing. We could launch a whole thread on whether one should count model syntax errors, but let's not go there.] -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Greetings! Suppose I have two domains, A and B, connected via a bridge or wormhole (AB). How is the code for AB usually realized? Is AB an object? A set of functions? Assume we are translating to C++.
Kind Regards, Allen lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > Suppose I have two domains, A and B, connected > via a bridge or wormhole (AB). How is the code for AB usually > realized? Is AB an object? A set of functions? Assume we are > translating to C++. You will probably get a lot of answers for this -- roughly one per person who has done an architecture. I speculate that the following sketch is probably the most common approach. Each domain implements a set of synchronous services corresponding to the bridge. Each synchronous service represents a different wormhole (i.e., a distinct communication). To do this in C++ you might have a Bridge class with methods representing all the wormholes in both domains. Or you might have a different Bridge class associated with each domain and they talk to one another. Some would argue that this is more maintainable since the service domain defines an invariant API so only the client's bridge needs to change when a domain is swapped. The second approach is rather strongly suggested by the recent wormholes paper from PT. Assuming the second approach, then when a wormhole is invoked in an action of domain A, the synchronous service method for that wormhole is invoked. In that method one or more of domain B's synchronous services (i.e., the API that domain B presents to the outside world) will be invoked. [The wormholes paper assumes a 1:1 relationship between synchronous services, but in practice this is often not the case when dealing with third party service domains or when reusing your own domains.] A's wormhole service may also do some data operations, such as units conversions on the arguments prior to invoking B's synchronous service. [This is provided for in the wormholes paper.] You will note that only the wormhole method in A needs to have access to a header file for B's synchronous services. This preserves the idea that only the bridge has carnal knowledge of two domains. In this case the bridge synchronous services only have carnal knowledge of the domain APIs. But somebody has to write the synchronous service code to Do the Right Thing within a domain and that implicitly means the author has to have detailed knowledge of what needs to be done. Things get a bit trickier in an asynchronous situation when domain A places an event on domain B's event queue. If domain B is expected to respond asynchronously, there has to be a mechanism for domain B to respond to the correct domain A entity at a later time (i.e., send a properly addressed event back to A). This is described as the transfer vector in the wormholes paper. In practice what happens is that domain A's synchronous service must encode a data packet with sufficient information to properly address a return event at a later time. This data packet is sent to domain B when B's synchronous service method is initially invoked. The domain B synchronous service places the appropriate event on the queue and it also stores away the encoded data packet for later use. Domain B doesn't know what is in the data packet but it does know when to send it back to domain A when processing is completed. It does so and the domain A synchronous service that B's wormhole method invokes decodes the packet and places the proper event on the queue. It's all quite simple if one of the heroes of one's youth was Machiavelli. -- H. S.
Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Allen Theobald wrote: > Suppose I have two domains, A and B, connected > via a bridge or wormhole (AB). How is the code for AB usually > realized? Is AB an object? A set of functions? Assume we are > translating to C++. As Lahman said, there's no right answer. Ask 10 architects and you'll get 20 different answers. If going to C++, with no specific optimisations on the bridge, then my preference is to use the GoF Adapter pattern. The calling domain (A) provides its interface as an abstract class (or classes). The called domain (B) also provides its interface as a class (or classes). The implementation of class B (called) is provided within the code for domain B, so it doesn't actually need to be an abstract class. However, for dependency management, I generally make it an abstract class. Finally, the bridge class (AB) is derived from class A (i.e. it provides an implementation for A). This implementation class can directly call the interface of B (usually). If necessary, polymorphism can be used to provide different bridges for different instances of the calling object. If domain A expects an asynchronous response then it should provide a callback class (the return vector). Because it's a class, B can store it in an attribute; or send it as event data, as needed. B should see it as a pure interface. You may think that there is no need to provide an implementation in the bridge; but you'd be wrong. If B passes back data then that data will need to be mapped to the data domains of A. (if you reject smart bridges, then this isn't an issue) The most difficult case is where A expects a synchronous response and B provides an asynchronous response. If this is likely then you need to build the capability into the architecture. How you do it depends on your threading model. If I assume, for now, that you have a single-threaded architecture then the technique is to split state actions into a number of parts (depending on the number of wormholes in the action). Each wormhole that expects a synchronous response provides a callback object that calls the next part of the action. If you're clever, you can even keep the event queue running whilst waiting for the response. That deals with the basic wormhole-wormhole interface.
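A rough C++ rendering of the adapter-style bridge just described, with a single wormhole and a callback class for the asynchronous reply (all names are invented; this is a sketch of the arrangement, not a definitive implementation):

    #include <memory>

    // The interface the calling domain (A) requires of the world,
    // published by A as an abstract class.
    struct DomainARequires {
        virtual ~DomainARequires() = default;
        virtual void request_measurement(double setting) = 0;
    };

    // The callback class A supplies for the asynchronous response
    // (the return vector); the called domain can store it in an
    // attribute or pass it as event data, and sees only a pure
    // interface.
    struct ReturnVector {
        virtual ~ReturnVector() = default;
        virtual void measurement_done(double value) = 0;
    };

    // The interface the called domain (B) provides; abstract here
    // only for dependency management, as noted above.
    struct DomainBProvides {
        virtual ~DomainBProvides() = default;
        virtual void start_measurement(double setting,
                                       std::shared_ptr<ReturnVector> rv) = 0;
    };

    // The bridge class AB: derived from A's interface, implemented
    // by calling B's, mapping data between the two domains' data
    // domains on the way through.
    class BridgeAB : public DomainARequires {
        DomainBProvides&              b;
        std::shared_ptr<ReturnVector> rv;
    public:
        BridgeAB(DomainBProvides& b, std::shared_ptr<ReturnVector> rv)
            : b(b), rv(std::move(rv)) {}
        void request_measurement(double setting) override {
            b.start_measurement(setting / 1000.0, rv);  // hypothetical
        }                                               // units mapping
    };

The synchronous-wait case (splitting the action around the callback) is omitted here, since that part genuinely depends on the threading model, as described above.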
However, I have found it useful to define a few different types of bridges. A second type is the "observer". This treats the domain as a white box and can observe (but not modify) the value of any attribute in the domain (for example, a user interface may want to display all relevant data about a subject matter - i.e. all attributes of a domain). The rationale for this type of bridge is that providing a wormhole API to every attribute in the domain is pointless: it's tedious, error-prone, horrible to maintain and non-value-added. Once you decide that you have this type of bridge, you need to decide whether you want the observer to poll the observed domain, or to be notified of changes (an intermediate solution is to be notified that a change occurred, but then to poll to determine what the change was). This is an analyst decision, but it will affect the C++ implementation of the bridge. For this type of bridge, the observing domain (A) uses standard wormholes; but the observed (B) knows nothing about the bridge. Furthermore, it is quite possible that the definition of "atomic" (for normalisation) in the two domains may be different. The C++ implementation for this type of bridge is essentially the same as for the wormhole-wormhole bridge. The only difference is that the architecture constructs the interface to B, not the analyst. This makes it easier to argue a case for tighter coupling between the two. However, it's probably better to stick to the adapter until you know that optimisation is needed: it keeps the architecture simpler. The third type of bridge is the architectural bridge. This is the bridge that provides a mapping from the OOA meta-model. I won't attempt to tell you how to implement this: it depends on your architecture. Just remember that translation onto an architecture is not the same thing as code generation. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > Lahman wrote: > > > I am strongly biased against smart bridges. Our defect rates are integer > > > factors higher in smart bridges than in domains themselves. I agree that > > > logically there is a strong case for placing it there, but in practice it > > > is not a good idea. > > > > You've got to be careful with this type of statistic. Domains are > > bigger than bridges, so can mask localised defect hot-spots. > > > > If we assume that: > > > > a1. for the calculation of the values needed by an input wormhole, > > the defect rate is higher than for other calculations because it > > is defined outside the scope of the Object model; and > > > > a2. domains are bigger than bridges > > > > Then you would expect to see a higher defect rate in smart bridges > > than in domains. The defect rate is lower in domains because the defect > > rate statistic is averaged over all elements in the domain or bridge. > > Domains contain a lot of stuff which (a1) assumes has a lower defect > > rate: so the average is lower.
It's actually a bit difficult to understand why the defect rate for your domains hasn't increased - your figures suggest that 2/3 of your domain code is now transforms (you've moved the bridge code into the domain) which you say has a higher bug rate. I suspect there may be an order-effect: your initial figures were from your earliest pilot project; and things may have since matured. I should not be surprised that the bridge-code defect rate is unchanged: it's still 100% bridge code, even if there's less of it. The fact that making it non-smart had no effect on the defect rate suggests that the smartness is not the cause of the defects. Finally, you forgot to quote my 3rd assumption, so I'll repeat it here: > > a3. The formalism for bridges is as rigorous as for domains. You give your own explanation of the statistic: > I attribute this to a couple of things. First, the models are easier > to grok. This depends on how you describe your bridges and interfaces. > Second, the models have a rigorous formalism that guides them while the > bridges and transforms are essentially just your basic procedural code. > I think this is actually quite important because it provides the > developer with a context that tends to support better self-checking > before things get to review or test (i.e., before the errors are counted). I think it is important to define your bridge meta-model before you get too involved with building them. The method doesn't define such a model; but it's not too difficult to construct an OIM of your bridging requirements (Or, rather, it hasn't been on the projects I've worked on). Once you have a model, you can define a simple language with which to populate that model; and thus provide an environment in which you can write tools (scripts) which check the semantics of the bridge code. > Third, the CASE tool performs detailed checks on the OOA elements as > they are entered but it merely checks the transforms and bridges for > syntax (we use an action language) prior to generating code. In other words, the defect rate calculation is different for bridges and domains. It's a bit like having 2 modules of C code. The programmer for module A just compiles the code and runs some test cases. The programmer of B runs Lint, fixes the errors, runs the test cases with Purify, and fixes those errors too. Upon review, you find that module A has more bugs than module B. If you were to conclude that programmer A introduces more bugs than programmer B then you'd have a 50% chance of being right. This may seem a rather obvious example, but it's not really any different from your defect rate statistic. If you don't have a rigorous bridge model, and you perform fewer checks on the bridges than on the domains, then you'd expect bridges to have a higher defect rate. This does not allow you to conclude that bridges are more error-prone. I suppose I should point out that it's impossible to draw any conclusions from a single data-point. All we can do is construct models and then define the statistics needed to confirm or falsify our hypotheses. We need a lot more data; and we need to make sure that we aren't comparing apples with oranges. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > This doesn't seem to support your case against smart bridges.
> You seem to have said that moving code out of the bridges and into the domain makes no difference to the defect rate.

Ah, I should have been more precise. The defect rates for each group (domain, bridge, transform) remain unchanged but the overall defect rate for the application is reduced because code moved from bridges to domains.

> You do seem to be confirming both my assumptions; although (a1) is weakened because you say that transforms have the same (high) bug rate. It's actually a bit difficult to understand why the defect rate for your domains hasn't increased - your figures suggest that 2/3 of your domain code is now transforms (you've moved the bridge code into the domain), which you say has a higher bug rate. I suspect there may be an order-effect: your initial figures were from your earliest pilot project; and things may have since matured.

No, the ~1/3 code from the bridges literally moved into the domains rather than into transforms so we still have ~1/3 transform code. For example, in the averaging example I gave a while back we used to do the averaging (i.e., invoking multiple hardware measurements) in the bridge because the domain just wanted a single measurement and the fact that the signals were noisy was a hardware problem. Now we average in the domain by iterating through a set of states doing the measurement rather than invoking a transform (i.e., the only transform simply computes the average). [The domain asks the hardware how many iterations (1-N) are needed so we don't view this as domain pollution.]

> I should not be surprised that the bridge-code defect rate is unchanged: it's still 100% bridge code, even if there's less of it. The fact that making it non-smart had no effect on the defect rate suggests that the smartness is not the cause of the defects.

But total defects are down for the application because there is less code in the bridges with high defect rates and more in the domain with lower defect rates.

> > I attribute this to a couple of things. First, the models are easier to grok.
>
> This depends on how you describe your bridges and interfaces.

We do natural language, which isn't terribly rigorous. (It's also a process improvement issue to improve the natural language statements, but we aren't there yet.)

> I think it is important to define your bridge meta-model before you get too involved with building them. The method doesn't define such a model; but it's not too difficult to construct an OIM of your bridging requirements (or, rather, it hasn't been on the projects I've worked on). Once you have a model, you can define a simple language with which to populate that model; and thus provide an environment in which you can write tools (scripts) which check the semantics of the bridge code.

True, an architectural model would help, but we don't have one.

> In other words, the defect rate calculation is different for bridges and domains. It's a bit like having 2 modules of C code. The programmer for module A just compiles the code and runs some test cases. The programmer of B runs Lint, fixes the errors, runs the test cases with Purify, and fixes those errors too. Upon review, you find that module A has more bugs than module B.
>
> If you were to conclude that programmer A introduces more bugs than programmer B then you'd have a 50% chance of being right. This may seem a rather obvious example, but it's not really any different from your defect rate statistic.
> If you don't have a rigorous bridge model, and you perform fewer checks on the bridges than on the domains, then you'd expect bridges to have a higher defect rate. This does not allow you to conclude that bridges are more error-prone.

True, but we aren't measuring programmers or even trying to improve the product by testing out defects. We collect the data for process control in a defect prevention program. The observation is that defect rates for bridges and transforms are higher so these are the Fat Rabbits to attack.

There are a variety of ways to solve the problem. You would build a meta model and develop associated translation tools. We chose the Don't Do That Anymore approach and eliminated smart bridges. Your solution is probably a better long term value but ours has the benefit of spending fewer short term resources on the solution. Our division places a _very_ high value on time to market so it is a very tough sell to divert resources for long term paybacks.

[BTW, this particular discussion is also in serious danger of the angels-on-pinheads syndrome because we already have 5-sigma application defect rates on release from Engineering to Product Test. So on a recent 80 KLOC domain we saw 4 bridge defects, 5 transform defects and 3 domain defects total in Product Test and nine months in the field. (In this case the bridge was necessarily smart for complicated reasons having to do with third party software.)]

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Well, this thread seems to be winding down. I've put my proposal, plus some examples (not ascii art!), on a web page at:

http://www.geocities.com/SiliconValley/Platform/2512/sm/

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de   Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Allen Theobald writes to shlaer-mellor-users:
--------------------------------------------------------------------

I have a question regarding implementing 1:1 relationships with STL's map facility.

Assume I have object A and object B with a 1:1 relationship. S-M says I can put either the id of A into B or I can put the id of B into A. Is there ever a time I can put each id into the other? Does that violate anything?

Those familiar with STL know that map.find(key) is O(log(n)), whereas map_iteration is O(n).

I would like O(log(n)) searches of objects A and of objects B.

Kind Regards,

Allen

David Foulkes writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen,

Is there any reason you couldn't implement your relationships using pointers rather than using the referential identifiers?

David

At 15:52 12/11/98 +0000, you wrote: [snip]
Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Allen Theobald wrote:
> I have a question regarding implementing 1:1 relationships with STL's map facility.
>
> Assume I have object A and object B with a 1:1 relationship. S-M says I can put either the id of A into B or I can put the id of B into A. Is there ever a time I can put each id into the other? Does that violate anything?
>
> Those familiar with STL know that map.find(key) is O(log(n)), whereas map_iteration is O(n).
>
> I would like O(log(n)) searches of objects A and of objects B.

To answer your question directly, I don't think that it violates normal form (though it feels like it ought to); but it does violate 1-fact-in-1-place.

A more important answer is that the analysis doesn't determine what is, or isn't, possible in an implementation. If you're using a basic map implementation (it sounds like you're doing a one-map-per-object implementation) then you aren't worrying about efficiency. Therefore you should stick to the O(n) iteration until you do need to optimise.

As soon as you start looking at optimisation, there are all sorts of things you can do. One would be to add an attribute into the defining object (i.e. the one that isn't formalising) and make it (M). This should be done by the translation, not the analyst.

A much more direct method of optimising a 1:1 relationship is to go to the O(1) algorithm where you link instances using pointers. (1:M relationships would add an appropriate container at the '1' end). You would have to decide whether you want O(log(n)) insertions and O(1) reads; or O(1) insertions and O(n) reads. There is no single correct answer.

And before you do anything, have your translator scan the process table to make sure the relationship is, in fact, navigated in each direction. At the same time, look for any conditional searches which can be evaluated in advance. For example: if, in one direction, the navigation always looks for the instances with a maximal value of an attribute, then the appropriate container may be a priority queue, not a list/vector.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de   Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald...

> I have a question regarding implementing 1:1 relationships with STL's map facility.
>
> Assume I have object A and object B with a 1:1 relationship. S-M says I can put either the id of A into B or I can put the id of B into A. Is there ever a time I can put each id into the other? Does that violate anything?

First, there is a qualification: if the relationship is 1:1c, then it must go on the 1c side.

If I understand your question correctly, you are asking if the identifier can be placed on one side or the other dynamically when the relationship is instantiated. The answer is a marginal maybe. In the OOA you do not have a choice -- the Information Model is a static description and you need to pick one side or the other at model time. However, once you get into the RD it is _conceivable_ that the translation could choose to implement relationships in a way that might, for example, place a pointer to the other object on either side.
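In the simplest (static) case, a pointer at both ends might look like the following C++ sketch -- all names are invented, and keeping the two ends consistent is precisely the architecture's burden:

    // Illustrative only: a 1:1 relationship held as a pointer at both
    // ends, giving O(1) navigation in each direction.
    struct B;  // forward declaration

    struct A {
        int a_id;           // identifier
        B*  r1 = nullptr;   // maintained by the architecture, not the analyst
    };

    struct B {
        int b_id;           // identifier
        A*  r1 = nullptr;   // back-pointer; must stay consistent with A::r1
    };

    // Accessors a translator might emit: linking writes both ends together,
    // so the model-level fact is still created and deleted as one unit even
    // though it is stored twice in the implementation.
    inline void link_r1(A& a, B& b)   { a.r1 = &b;      b.r1 = &a; }
    inline void unlink_r1(A& a, B& b) { a.r1 = nullptr; b.r1 = nullptr; }

The dynamic, one-side-or-the-other variant is messier: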
For example, one could introduce a flag attribute in the translation that indicated whether the particular instance contained a pointer to the other instance. The architecture could provide mechanisms to check the flag and, if true, follow the pointer; otherwise search the other instances for those with a true flag and a pointer that contained the first instance's address. This is highly unlikely, though, because it would present a lot of problems for the translation.

The translation still needs to preserve the referential integrity that is represented by the referential attribute abstraction. It is difficult to do this with simple architectural mechanisms like pointers in a foolproof manner -- the thought of Mean Time To Protection Violation < 10 minutes comes to mind -- that won't degrade performance (as in the hokey example above).

> Those familiar with STL know that map.find(key) is O(log(n)), whereas map_iteration is O(n).
>
> I would like O(log(n)) searches of objects A and of objects B.

This is a different issue. I am not familiar enough with STL to address that specifically. But I can point out that there is nothing to prevent the translation from implementing whatever will be the most efficient approach. In particular I would be very suspicious of an implementation that invoked any sort of search function to navigate a 1:1 relationship.

The obvious way to avoid the problem is to introduce a pointer on both sides. The translation can easily do this for unconditional relationships or for the most common conditional relationships in a manner that preserves referential integrity (i.e., it handles creates and deletes correctly). Providing this sort of consistency is not so difficult and usually does not involve a significant performance penalty.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Theobald (by responding to Whipp responding to Theobald):

>> Assume I have object A and object B with a 1:1 relationship. S-M says I can put either the id of A into B or I can put the id of B into A. Is there ever a time I can put each id into the other? Does that violate anything?

> To answer your question directly, I don't think that it violates normal form (though it feels like it ought to); but it does violate 1-fact-in-1-place.

The definitions of normal form predate the SM data modeling conventions, which I believe are unique in their use of referential attributes as an expression of relationship cardinality. So, anything you do with referential attributes is unrelated to questions of normalization, per se.

Another historical footnote: before SM one just put "1:1", "1:1c", etc. on the arrow; the rest was left to the designer/implementer. But this left the process modelers in the lurch, without a standard means of expressing relationship traversal and manipulation. Thus, some implementation-like technique was needed (for specificity in process models and to grease the skids of execution of the models): SM-ers got pointers (i.e., referential attributes).

On the plus side, they are nifty when expressing relationships with constraints which can be expressed by collapsed referentials.
On the minus side, they can be troubling to people who are new to the method because of their similarity to a naive implementation. This gives the impression of inevitable performance problems, which need not be the case.

-Chris

-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------

"Leslie Munday" writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 15:52 12/11/98 +0000, you wrote: [snip]

Not familiar with STL, but.. isn't the referential attribute location determined by the nature of the relationship? So if B is adjacent to A (no matter where A goes) then B needs a reference to A. But if A takes the same colour as B then A also wants a referential attribute to B. Therefore we have two different relationships between A and B, both passing referential attributes in opposite directions.

Leslie

GRAINGER A writes to shlaer-mellor-users:
--------------------------------------------------------------------

I am currently in the process of writing a report for a university assignment on 'The Shlaer Mellor Method'. I would be grateful to anyone who could tell me where I can get any information on the topic.

Best Regards

Alan Grainger.

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

You should probably try the Project Technology and Kennedy Carter web pages as your first ports of call:

http://www.projtech.com/ (the US home of Sally Shlaer and Stephen Mellor)
http://www.kc.com/ (the UK leading authority on SM)

These are probably the two most active companies in the Shlaer Mellor arena. Their web pages contain some interesting info about Shlaer Mellor. Another link I found recently was this one:

http://cheetah.sdd.sri.com/eliot/ads/shlaer-mellor.html

I hope these are useful. If you find any other major sources of info, I would be grateful if you would let me know. If you have any simple questions I would be happy to try and answer them.

Best Regards,

Daniel Dearing
Plextek Limited
Communications Technology Consultants
Great Chesterford Essex UK

>>> GRAINGER A 18/11/98 15:09:23 >>> [snip]

'archive.9812' --

Dustin Oakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

First off, I'm testing to see if I'm really subscribed to this list..
Assuming that I am- I'm just starting with S-M analysis, and I wonder if anyone would be willing to answer some questions I have regarding specific mappings of my understanding of the problem and current workflows into the Object Information Diagrams.

Thanks

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sure. Fire away!

-- steve mellor

At 09:41 AM 12/3/98 -0600, Dustin Oakley wrote: [snip]

"Michael M. Lee" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Sure, Dustin, jump in. I'm sure you'll get plenty of responses!

- Michael

At 09:41 AM 12/3/98 -0600, you wrote: [snip]

--------------------------------
M O D E L   I N T E G R A T I O N
Model Based Software Development
500 Botany Court
Foster City, CA 94404
mike@modelint.com
650-341-2544(v) 650-571-8483(f)
---------------------------------

Dustin Oakley writes to shlaer-mellor-users:
--------------------------------------------------------------------

Thanks for all your friendly responses...

I should clarify that when I say I am starting to do SM analysis, I have only read the 1st SM book (_Modeling the World in Data_), and resources on the web. I am attempting to create a problem level Object Information Diagram. The system will be for tracking ink consumption on a printing press. I am using Rational Rose for the models as per 'Developing S-M Models Using UML'.

My first question is- Right now I am trying to model the workflow as it currently happens, with people reading meters, filling out paperwork, sending it to another dept, etc. Is that a correct first step in analysing the system?

Assuming that I should be doing this, I have several situations such as the following:

Operator reads a Counter
Operator fills out an Ink Consumption Form
Ink Consumption Form has fields including Counter Start and Counter Stop

Will I have two referential attributes in Ink Consumption Form which each have an association to Counter? If I do this for every field on the form, it seems like it will get very messy. (Most of the information being recorded relates to entities which are already involved in other relationships in other parts of the model.) A rough sketch of what I mean is below.

Secondly, I have the following objects and relationships: Container may be placed in Location. This is done by Operator. The fact of which operator placed the Container there is not important, but it seems that it should be captured in the model that it is Operators who fulfill this function. Any ideas?
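To make the first question concrete, this is roughly what I think the two referential attributes would amount to (C++-style declarations I made up; not from any book or tool):

    // Illustrative only: Ink Consumption Form carrying two referential
    // attributes, each formalising its own relationship (R1, R2) to Counter.
    struct Counter {
        int  counter_id;      // identifier
        long current_count;
    };

    struct InkConsumptionForm {
        int form_id;            // identifier
        int start_counter_id;   // referential attribute formalising R1
        int stop_counter_id;    // referential attribute formalising R2
    };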
I'm anxiously awaiting my copy of _States_ which will hopefully shed some light on the domain partitioning aspect and other details I don't fully grasp yet.

A final question- is it feasible to follow a SM methodology without a proper SM tool? I may end up being only a partial SM user otherwise.

Thanks in advance for your help.

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 05:54 PM 12/3/98 -0600, shlaer-mellor-users@projtech.com wrote:
> Dustin Oakley writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Thanks for all your friendly responses...
>
> I should clarify that when I say I am starting to do SM analysis, I have only read the 1st SM book (_Modeling the World in Data_), and resources on the web.
> I'm anxiously awaiting my copy of _States_ which will hopefully shed some light on the domain partitioning aspect and other details I don't fully grasp yet.

I'd say you should wait until you get the "Object Lifecycles" book. It would be difficult to effectively do an Information Model class diagram without first understanding domain modeling "above" it, and Object Communication and State Modeling "below" it.

We have a few papers on our web site designed for the beginner to OOA/RD and UML. You may want to go to www.pathfindersol.com, and from our downloads section take a look at:

- "Getting Started With OOA/RD" Version 1.0, 1997: This document is intended for the senior software professional investigating Object Oriented Analysis and Recursive Design (OOA/RD). It outlines the first few steps that should be taken to evaluate and understand the benefits of OOA/RD.

- "Model Based Software Engineering - An Overview of Rigorous and Effective Software Development Using UML" Version 1.0, 10/28/98: This document provides a brief overview of Model-Based Software Engineering (MBSE), an effective method for developing high performance real-time, embedded, and other types of challenging software applications. This approach pairs the Object Oriented Analysis and Recursive Design process originally pioneered by Sally Shlaer and Stephen Mellor with the Unified Modeling Language, the standard object-oriented modeling notation of the Object Management Group (OMG).

- "OOA/RD Software Engineering Process" Version 1.2, 6/9/97: This document outlines the processes involved in effectively applying OOA/RD as a software engineering approach in the context of software product development.

> A final question- is it feasible to follow a SM methodology without a proper SM tool? I may end up being only a partial SM user otherwise.

It is difficult to achieve "real world" productivity without reasonable tool support. The larger the project, the more important it is to have tooling that knows and correctly supports your software development method.

_______________________________________________________
| Pathfinder Solutions Inc.   www.pathfindersol.com   |
| 888-OOA-PATH                                        |
| effective solutions for software engineering challenges |
| Peter Fontana   voice: +01 508-384-1392             |
| peterf@pathfindersol.com   fax: +01 508-384-7906    |
|_____________________________________________________|

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Oakley...

> First off, I'm testing to see if I'm really subscribed to this list..

It worked.
You will find it is more reliable than OTUG in that this news list has never lost my registration while OTUG does that with annoying regularity.

> Assuming that I am- I'm just starting with S-M analysis, and I wonder if anyone would be willing to answer some questions I have regarding specific mappings of my understanding of the problem and current workflows into the Object Information Diagrams.

Sure. People, Dustin was looking for a methodology on OTUG and I convinced him this was the Path of True Enlightenment, so be nice.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Oakley...

> I should clarify that when I say I am starting to do SM analysis, I have only read the 1st SM book (_Modeling the World in Data_), and resources on the web. I am attempting to create a problem level Object Information Diagram. The system will be for tracking ink consumption on a printing press. I am using Rational Rose for the models as per 'Developing S-M Models Using UML'.

Actually, you do need that 2nd book, as Fontana pointed out. It has an overview of the entire methodology while the first book is concerned simply with identifying and describing objects.

The first thing that you need to do during an analysis is to develop a Domain Chart. This is a kind of large scale partitioning of the system. [Unfortunately the "domains" referred to in the Data book (pg. 37) are not the same thing, though they indirectly reflect decisions made in the Domain Chart.] A Domain Chart partitions the system on the basis of subject matters, levels of abstraction, client/service relationships, and flow of requirements. In your case you will probably only have a couple of domains representing software you have to create, but you may have some other domains that represent hardware (e.g., the printing press, if it is monitored directly by the software).

The core methodology steps are:

(1) Create a Domain Chart, which includes defining the relationships between domains.
(2) For a domain, create the Information Model, which is what the Data book is about.
(3) For each domain, create the Object Communication Model, which defines overall communications (as opposed to relationships) between objects.
(4) For each active object, define a state model, which describes the object's macro functionality.
(5) For each state model action, define a process model that defines the object's micro functionality.

> My first question is- Right now I am trying to model the workflow as it currently happens, with people reading meters, filling out paperwork, sending it to another dept, etc. Is that a correct first step in analysing the system?

Yes and no. You certainly have to think about and understand these issues as part of developing your requirements. This basic understanding will determine what abstractions you need and how they will interact. However, in steps (1) and (2) above you do not use workflow directly as the Responsibility people use it with use cases. Instead it merely provides guidelines as you evaluate domains and object abstractions. (It can be used in a rough way to identify domains, but the primary emphasis in domain selection is level of abstraction and flow of requirements.)
This comes into play more directly in step (3) where you identify communications and, implicitly, allocate functionality among the objects.

At this point I have already flooded this message with so many unfamiliar methodology concepts that it probably is not useful to you. I could go into greater detail, but I think it would be better to have that 2nd book under your belt first. I provide some rough answers below, but I suggest you ignore them until you read that book.

> Assuming that I should be doing this, I have several situations such as the following:
> Operator reads a Counter
> Operator fills out an Ink Consumption Form
> Ink Consumption Form has fields including Counter Start and Counter Stop

I think the Counter and even the Operator will be outside the tracking software domain. These activities will probably be manual even with the tracking software in place. At some point someone (in another department?) types in the Ink Consumption Form's data. This becomes a transaction message entering the Ink Tracking domain. The Ink Tracking domain may have an Ink Consumption Form object with the indicated attributes. That incoming message (from the domain bridge) will cause the Ink Consumption Form object to be created or updated. This may also trigger other activities in the Ink Tracking domain, depending upon the new values of the counter.

You could model the Counter and Operator in a separate domain, say Printer Controller, but usually we would not do this because that domain will not generate any code (i.e., the domain is essentially a place holder on the Domain Chart for a "realized" or already existing domain). We would pick up the work flow at the point where it enters the computer. OTOH, if we decide to automate the Printer Controller and eliminate the operator person, we would model the domain internals with Counter, Operator, and other objects that exchanged messages (events) representing the flow of control as it (e.g., Printer) moved from state to state (e.g., from Initialized to Started Press Run, to Press Run Done, or whatever).

> A final question- is it feasible to follow a SM methodology without a proper SM tool? I may end up being only a partial SM user otherwise.

You can do it, but it is painful and you give up a lot. The main things you give up are:

- error checking. The CASE tools perform detailed checking like a language compiler. For example, Rose supports referential attributes but it doesn't really do anything with them -- the S-M tools would flag missing or incorrect referential attributes. You will eventually find the errors, but it will be later rather than earlier.

- simulation. The S-M models can be manually simulated using the legendary (to us Old Timers) penny pushing technique. But this gets old real quick for larger systems. You can't do this automatically except with CASE tools that specialize in S-M. For a small pilot project like yours, this may not be too painful.

- code generation. The current state of the art is that the code generators create code that is too slow for performance sensitive applications, but for a small, simple pilot project like yours this is not a big deal.

- custom GUIs. You don't have to "bend" anything to get it into the tool and the specialized tools are supposed to be designed to make S-M entry easy. (IMHO, this is not true, but that is because the GUIs are mostly '80s Unix paradigms.)
--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Gregory Rochford writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dustin Oakley wrote:
> Thanks for all your friendly responses...
/snip/
> I'm anxiously awaiting my copy of _States_ which will hopefully shed some light on the domain partitioning aspect and other details I don't fully grasp yet.

Another book that I found informative is Leon Starr's "How to Build Shlaer-Mellor Object Models" ISBN 0-13-207663-2.

> A final question- is it feasible to follow a SM methodology without a proper SM tool? I may end up being only a partial SM user otherwise.

It's possible, it just takes more effort. I know I was happy when I could archive my paper and PostIt note models...

> Thanks in advance for your help.

best gr

"Dana Simonson" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> I'm anxiously awaiting my copy of _States_ which will hopefully shed some light on the domain partitioning aspect and other details I don't fully grasp yet.

I would recommend you get Leon Starr's book "How to Build Shlaer-Mellor Object Models" ISBN 0132076632. (Amazon's 54,672nd best selling book :-) ) He presents a practical approach to understanding the system BEFORE beginning to model it. From your brief comments, I think Leon's approach of sketching out informal diagrams first may help you to decide what needs to be modeled.

<<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>><<<>>>
Dana Simonson
Engineering Section Manager
Transcrypt Operations - Waseca
dsimonson@transcrypt.com   www.transcrypt.com

"Leslie" writes to shlaer-mellor-users:
--------------------------------------------------------------------

----- Original Message -----
From: Dustin Oakley
To:
Sent: Thursday, December 03, 1998 3:54 PM
Subject: Re: (SMU) Question

: Dustin Oakley writes to shlaer-mellor-users:
: --------------------------------------------------------------------
:
: Thanks for all your friendly responses...
:
: A final question- is it feasible to follow a SM methodology without a proper SM tool? I may end up being only a partial SM user otherwise.
:
: Thanks in advance for your help.

Dustin,

I use Rose for S-M modeling. It's only good for class diagrams. The support for STDs and functional specification is next to useless. But when that's all one has, you have to make the best of the tools you've got. Why not e-mail the model to me, so I can look at it in conjunction with your mail?

Leslie.

"Neil Lang" writes to shlaer-mellor-users:
--------------------------------------------------------------------

> Dustin Oakley writes to shlaer-mellor-users:
> --------------------------------------------------------------------
..... some deletia...
> Assuming that I should be doing this, I have several situations such as the following:
> Operator reads a Counter
> Operator fills out an Ink Consumption Form
> Ink Consumption Form has fields including Counter Start and Counter Stop
> Will I have two referential attributes in Ink Consumption Form which each have an association to Counter? If I do this for every field on the form, it seems like it will get very messy.
> (Most of the information being recorded relates to entities which are already involved in other relationships in other parts of the model.)

It's very important to distinguish between objects and forms. A form is (almost) never an object in your model. Rather each field on the form corresponds to an attribute of some real object in your model. I found that I could never stress this point too much during training classes. In practice forms turn out to be helpful in abstracting objects; look at each field and determine what object it fundamentally describes. You'll often discover new attributes and even objects this way.

Hope this helps a bit.

Neil Lang
Senior Instructor/Curriculum Developer
Siebel Systems, Inc.
Voice: (510) 594-6688   FAX: (510) 594-6128
Email: nlang@siebel.com

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi All,

Does anyone have any experience of applying design patterns to Shlaer Mellor developments, or can anyone recommend a good source of information about design patterns (specifically in the context of SM developments)?

Best Regards,

Daniel Dearing

Ed Wegner writes to shlaer-mellor-users:
--------------------------------------------------------------------

>>> Daniel Dearing December 8, 1998 5:26 am >>>
> Does anyone have any experience of applying design patterns to Shlaer Mellor developments, or can anyone recommend a good source of information about design patterns (specifically in the context of SM developments)?

There were a couple of postings from Katherine Lato at Lucent back in May of 1996 on how she was applying design patterns to the translation process - I recall a pattern about having a central or common mechanism for storing "unconsumed" events. I don't recall any other discussions of any substance on the subject in this forum.

Regards,

Ed Wegner

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Daniel,

> Does anyone have any experience of applying design patterns to Shlaer Mellor developments, or can anyone recommend a good source of information about design patterns (specifically in the context of SM developments)?

Sadly only one participant I know of uses design patterns -- Katherine Lato. She tried to stir up some interest a while back but was disappointed at the militant apathy. I am not sure if she is still lurking. If she isn't you might try her directly at lato@ih4ess.ih.att.com. I believe she would be interested in sharing her experience with them.

I have no idea why S-M people seem reluctant to embrace design patterns. My speculation is that there are no champions on the forum as there are on OTUG. In our shop it is simply that no one has taken the time to learn enough about them to make a proper evaluation -- I am the only one who knows what they are and my knowledge stems solely from scanning the GOF book.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:
> I have no idea why S-M people seem reluctant to embrace design patterns.
> My speculation is that there are no champions on the forum as there are on OTUG. In our shop it is simply that no one has taken the time to learn enough about them to make a proper evaluation -- I am the only one who knows what they are and my knowledge stems solely from scanning the GOF book.

I'm not sure that it's a reluctance. It's a matter of relevance.

If I am constructing a translator then I will often describe its features in terms of design patterns. For example, a wormhole-bridge is generated according to the adaptor pattern; and subtyping will often use the state pattern.

However, if I'm defining a translator, am I really using design patterns? Sure, I'm using the class structures: but design patterns are really concerned with describing and resolving forces. Is a set of translation rules a pattern language? I'd be inclined to say no, though the content is pretty similar.

Analysis patterns are a different matter. Certain patterns do repeat themselves in analysis models. Leon Starr's book describes a few: not in the form of a pattern language; but in a way that does present the forces surrounding them. So, although he doesn't call them patterns, that's what they are.

My own approach, when I see recurring patterns, is to think "hmm, something is repeating itself: that suggests commonality: how can I abstract it into a domain?". The bridge to this new domain is then a description of how to apply the pattern that's described in that domain.

So, the OOD camp places more emphasis on design patterns, because they need to use them. If you're doing SM, then you write a translator instead of a pattern language. That said, there are always some commonalities that I choose not to abstract into a formal domain.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de   Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Daniel Dearing writes to shlaer-mellor-users:
--------------------------------------------------------------------

David, (and everyone else who responded)

Thanks for the response(s).

By way of some background information, my interest in "design patterns" is due to the fact that I am in the process of introducing an OO method and associated CASE tools into our organisation, and I was wondering if the use of "design patterns" would gain our team (with limited SM experience) some leverage in the early phases of the project.

From your email below, you seem to be saying that there are "design" patterns which can be considered as translators (or bits of translators) and there are "analysis" patterns which are recurring analysis model patterns. If I have understood you correctly, I think what I was really asking about was analysis patterns (or both).

When I started doing SM development, I thought that every application would be entirely different from all others. But now, after being in the industry for several years, I just can't help thinking that people must have modelled things like Repository domains and GUI domains and network management domains etc. that must be present in many systems and hence, as you suggest, common patterns could be abstracted.

I am trying to invent as few new wheels as possible and I wondered if patterns only existed in the minds of clever experienced software engineers, or if the patterns that are documented in the book by the "Gang of Four" or elsewhere had been successfully used by anyone.
Best Regards,

Daniel Dearing

>>> Dave Whipp 08/12/98 16:26:37 >>> [snip]

Brad_Appleton-GBDA001@email.mot.com writes to shlaer-mellor-users:
--------------------------------------------------------------------

Dave Whipp writes:
> My own approach, when I see recurring patterns, is to think "hmm, something is repeating itself: that suggests commonality: how can I abstract it into a domain?". The bridge to this new domain is then a description of how to apply the pattern that's described in that domain.
>
> So, the OOD camp places more emphasis on design patterns, because they need to use them. If you're doing SM, then you write a translator instead of a pattern language. That said, there are always some commonalities that I choose not to abstract into a formal domain.

Hmmn - some of that sounds a lot like the stuff in Jim Coplien's recent book on Multi-Paradigm Design for C++. It's primarily about conducting commonality and variability analysis for a given domain (in this case C++) within a given scope. There is a more general article about it in the November/December IEEE Software:

"Commonality and Variability in Software Engineering"
by James Coplien, Daniel Hoffman, and David Weiss
IEEE Software, Vol. 15, No. 6, November/December 1998
http://www.bell-labs.com/~cope/Mpd/IeeeNov1998/

The book, "Multi-Paradigm Design for C++", is available from Addison Wesley.
I'm sure Cope has a link to it from his homepage at: http://www.bell-labs.com/~cope/. Cope also has some slides for a presentation he gave about the book at: http://www.bell-labs.com/~cope/Talks/Patterns/cpg19981110/

Some of the stuff there could prove quite helpful in discovering techniques for systematically translating some things into C++, and the general ideas could conceivably be helpful for other translators in other subject and/or implementation domains.

Cheers!

--
Brad Appleton | http://www.enteract.com/~bradapp/
"And miles to go before I sleep." | 3700+ WWW links on CS & Sw-Eng

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> However, if I'm defining a translator, am I really using design patterns? Sure, I'm using the class structures: but design patterns are really concerned with describing and resolving forces. Is a set of translation rules a pattern language? I'd be inclined to say no, though the content is pretty similar.

Perhaps not a pattern language, but I would think translation rules are a fertile field for design patterns. For nuts & bolts OOA constructs there are only so many architectural constructs to map. For a basic architecture I could see colorization being simply the selection of the appropriate pattern to use during translation for, say, a 1:M relationship. Similarly, I can envision a small handful of patterns for handling things like instance locking under the simultaneous view of time.

> Analysis patterns are a different matter. Certain patterns do repeat themselves in analysis models. Leon Starr's book describes a few: not in the form of a pattern language; but in a way that does present the forces surrounding them. So, although he doesn't call them patterns, that's what they are.

Yes, in the OOA one could identify any number of patterns because one is modeling arbitrary problem space structures. OTOH, the S-M notation is a lot simpler than UML and many of the design pattern features might only be relevant to the translation. (I assume this is part of what you meant above.) Perhaps we need to colorize patterns used in the OOA for the translation.

> My own approach, when I see recurring patterns, is to think "hmm, something is repeating itself: that suggests commonality: how can I abstract it into a domain?". The bridge to this new domain is then a description of how to apply the pattern that's described in that domain.

This seems tough to do in a typical design pattern situation. The subject matter of the pattern (i.e., the specific objects) may be quite different; it is usually only the nature of the interactions that the pattern describes in a generic way. For instance, in the Proxy pattern a new object is introduced as a proxy or surrogate for an already existing subject object. I don't see this sort of thing being abstracted into another domain if it shows up in a couple of different places in the model with different client and subject objects in each case.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dearing...
> I am trying to invent as few new wheels as possible and I wondered if patterns only existed in the minds of clever experienced software engineers, or if the patterns that are documented in the book by the "Gang of Four" or elsewhere had been successfully used by anyone.

In at least one sense they do exist in the minds of experienced engineers. For a pattern to be useful it has to be reusable and it has to be described in a way that makes it recognizable when one stumbles into the appropriate situations. Recognizing that it is reusable and documenting it correctly requires significant experience.

Books like the GOF book can describe patterns properly but they don't get inscribed in your memory until you actually use them a bit. It is one thing to read the theory and quite another to practice it. It takes a while to get the key characteristics sufficiently ingrained for pattern recognition (to coin a phrase). So I would not expect to get any big savings up front. OTOH, one has to start somewhere, so it can't do any harm to at least keep your eye out for potential patterns.

I certainly feel that there is a lot of opportunity for using patterns in S-M development, so building the necessary experience is probably a good idea. For example, in my response to Whipp I needed an example of a pattern to make a point, so I grabbed the GOF book and the second one I looked at (Proxy) was what I needed. However, as I scanned the pattern, it occurred to me that this sort of thing shows up with fair regularity in our software because we have three places where data lives (hardware RAM, computer memory, and disk) and this is relevant when the user debugs by changing pin states in RAM on the fly and eventually wants to save them back to disk. So we have been using the pattern without knowing it and we might have saved some time the first time around if we had been aware of it.

FWIW, I am not sure that the real value in using patterns lies in time savings, though. We came up with the Proxy pattern without knowing about it. Given the nature of the problem it is a fairly obvious way to approach it. If we had been aware of it, we might have saved some time by getting the details right before going to test but I'm not sure that would have been crucial. However, I think that using the pattern would have produced a more maintainable system because we would have documented its use for posterity and would have handled the details (e.g., object names) differently. This would have resulted in greater consistency across those places where we do this. That consistency is of significant value when the system starts getting large.

--
H. S. Lahman
There is nothing wrong with me that could not be cured by a capful of Drano
Teradyne/ATD
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842 (Fax) (617)-422-3100
lahman@atb.teradyne.com

Jon Monroe writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lahman writes:
> Responding to Whipp...
[snip]
> > My own approach, when I see recurring patterns, is to think "hmm, something is repeating itself: that suggests commonality: how can I abstract it into a domain?". The bridge to this new domain is then a description of how to apply the pattern that's described in that domain.
>
> This seems tough to do in a typical design pattern situation.
> The subject matter of the pattern (i.e., the specific objects) may be quite different; it is usually only the nature of the interactions that the pattern describes in a generic way. For instance, in the Proxy pattern a new object is introduced as a proxy or surrogate for an already existing subject object. I don't see this sort of thing being abstracted into another domain if it shows up in a couple of different places in the model with different client and subject objects in each case.

I think there are (at least) two types of patterns: design (structural) and analysis. A design pattern is a way of organizing the constructs of a programming language (in this case, OOA) in a tried and true way that leads to a better built (maintainable, readable, etc.) product. An analysis pattern is really the abstraction of a subject matter in a way that multiple clients who require that same subject matter can reuse it without modification - in other words, domain analysis.

I've seen many examples of design patterns in our models. For example, the Composite pattern from the Gang Of Four book appears in our information models from time to time. A good source of examples of IM patterns is the Project Technology Information Modeling course exercises. We have found that our best IMs look quite similar to the "Biomedical Treatment Facility" IM. We had the privilege of having Neil Lang as our instructor for that class. He spent part of the class showing us some of the patterns he had identified. Incidentally, most GOF patterns do not directly apply to S-M OOA because they focus on interfaces and behavior. The GOF patterns that seem to work best are ones that focus on objects and relationships (such as Composite).

We've also seen design patterns appear in our state models. An example is the case where one instance sends an event to multiple instances of another object, and then waits for all of the instances to complete their threads before advancing to the next state. One pattern for solving this is for the originating instance to keep a counter of the number of replies it is expecting. Each of the other instances sends an event back when it has completed its thread. The originating instance decrements its counter until it reaches zero, and then generates an event to itself to proceed on its thread. I don't have a cute name for this pattern, but we use it everywhere.

There is a book out now called "Analysis Patterns: Reusable Object Models" by Martin Fowler, Addison-Wesley, ISBN 0201895420. It uses a Booch-style approach to identifying objects (as opposed to S-M), and identifies several "patterns" for health care and financial markets. Examples of his financial patterns are Portfolio, Quote, and Scenario. He has identified narrow subject matters that may be required by several different types of applications, and created analysis models for them. I think this is the type of pattern suggested by Whipp above when he looks for commonality and attempts to abstract it into new domains.

Jonathan Monroe
Abbott Laboratories - Diagnostics Division
North Chicago, IL
monroej@ema.abbott.com

"Young, Roderick A" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Here are a couple of articles that may help. Both are listed as articles on the Project Technology Web page. The first article is best read in conjunction with the rest of the articles in the magazine.
It is devoted to design patterns and it helps to put the Shlaer-Mellor article in the context of other thoughts in the area of design patterns. As I recall, Steve Mellor also contributed to the series introduction. Recursive Design of an Application-Independent Architecture IEEE Software, January, 1997 Object-Pattern Model Libraries Help Push Projects to the Starting Gate, ELECTRONIC DESIGN January 26, 1998 Rod Young Lockheed Martin > ---------- > From: Daniel Dearing[SMTP:DSD@plextek.co.uk] > Reply To: shlaer-mellor-users@projtech.com > Sent: Tuesday, December 08, 1998 10:45 AM > To: shlaer-mellor-users@projtech.com > Subject: Re: (SMU) Design Patterns > > Daniel Dearing writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > David, (and everyone else who responded) > > Thanks for the response(s). > > By way of some background information, my interest in "design patterns" > is due to the fact that I am in the process of introducing an OO method > and associated CASE tools into our organisation, and I was wondering if > the use of "design patterns" would gain our team (with limited SM > experience) some leverage in the early phases of the project. > > From your email below, you seem to be saying that there are "design" > patterns which can be considered as translators (or bits of translators) > and there are "analysis" patterns which are recurring analysis model > patterns. If I have understood you correctly, I think what I was really > asking about was Analysis patterns (or both). > > When I started doing SM development, I thought that every application > would be entirely different from all others. But now, after being in the > industry for several years, I just can't help thinking that people must > have modelled things like Repository domains and GUI domains and network > management domains etc that must be present in many systems and hence, > as you suggest, common patterns could be abstracted. > > I am trying to invent as few new wheels as possible and I wondered if > patterns only existed in the minds of clever experienced software > engineers, or if the patterns that are documented in the book by the > "Gang of four" or elsewhere had been successfully used by anyone. > > Best Regards, Daniel Dearing > > > > >>> Dave Whipp 08/12/98 16:26:37 >>> > Dave Whipp writes to shlaer-mellor-users: > -------------------------------------------------------------------- > > lahman wrote: > > > I have no idea why S-M people seem reluctant to embrace design > patterns. My > > speculation is that there are no champions on the forum as there are > on > > OTUG. In our shop it is simply that no one has taken the time to > learn > > enough about them to make a proper evaluation -- I am the only one who > knows > > what they are and my knowledge stems solely from scanning the GOF > book. > > I'm not sure that its a reluctance. Its a matter of relevance. > > If I am constructing a translator then I will often dscribe its features > in terms of design patterns. For example, a wormhole-bridge is > generating > according to the adaptor pattern; and subtyping will often use the state > pattern. > > However, if I'm defining a translator, am I really using design > patterns? > Sure, I'm using the class structures: but design patterns are really > concerned with describing and resolving forces. Is a set of translation > rules a pattern language? I'd be inclined to say no, though the content > is pretty similar. > > Analysis patterns are a different matter. 
Certain patterns do repeat > themselves in analysis models. Leon Starr's book describes a few: not in > the form of a pattern language; but in a way that does present the > forces surrounding them. So, although he doesn't call them patterns, > that's what they are. > > My own approach, when I see recurring patterns, is to think "hmm, > something is repeating itself: that suggests commonality: how can I > abstract it into a domain?". The bridge to this new domain is then a > description of how to apply the pattern that's described in that domain. > > So, the OOD camp places more emphasis on design patterns, because they > need to use them. If you're doing SM, then you write a translator > instead of a pattern language. That said, there are always some > commonalities that I choose not to abstract into a formal domain. > > > Dave. > > -- > Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany > mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 > Opinions are my own. Factual statements may be incorrect. smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Jon Monroe... > We've also seen design patterns appear in our state models. An example is > the case where one instance sends an event to multiple instances of another > object, and then waits for all of the instances to complete their threads > before advancing to the next state. One pattern for solving this is for the > originating instance to keep a counter of the number of replies it is > expecting. Each of the other instances sends an event back when it has > completed its thread. The originating instance decrements its counter until > it reaches zero, and then generates an event to itself to proceed on its > thread. I don't have a cute name for this pattern, but we use it > everywhere. Until a few years ago I was using the same approach. As you know, it has the drawback of requiring an extra counter attribute be added to the sending object on the IM (or otherwise floating about in the OOA model). One day I decided to work out why I needed to synchronize these objects together with events. When I followed the threads of control down to the implementation domain I found that it could not handle repeated requests to perform the same operation without the OOA model being required to process intervening events. To solve this problem I put an "object" in the bridge to the implementation domain. On being created the instance looks to see if there are any other like instances. If there are none it moves to the next state and invokes the operation, if there are it does nothing. When the operation has finished the final state looks to see if there exists any other like instance. If there is it sends an event to it to move it to the state where it can invoke the operation, if not it does nothing. Might this be another (familiar) State Model pattern? > There is a book out now called "Analysis Patterns : Reusable Object Models" > by Martin Fowler, Addison-Wesley, ISBN 0201895420. Unfortunately, I don't know much about Design Patterns but would like to know more. Is the above book a good place to start?
Mike -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Jon Monroe wrote: > > I've seen many examples of design patterns in our models. [...]. An example is the > case where one instance sends an event to multiple instances of another object, and > then waits for all of the instances to complete their threads before advancing to > the next state. One pattern for solving this is for the originating instance to keep > a counter of the number of replies it is expecting. Each of the other instances > sends an event back when it has completed its thread. The originating instance > decrements its counter until it reaches zero, and then generates an event to itself > to proceed on its thread. I don't have a cute name for this pattern, but we use it > everywhere. Anytime I see this type of "design" pattern in the OOA, I know that there is design pollution. You have a concept (send events - wait for all consequent actions to complete); but you have to describe it as a design. Faced with a well defined problem, my inclination is to tweak the meta-model. This tweaking should not break other parts of the model. One advantage of putting threads into the meta model is that a thread can terminate by simply not sending an event from an action (or even by reaching an "ignore" entry in the STT). This can greatly simplify state models in domains that use synchronised threads. This type of solution will, of course, break existing simulators and translators. I don't use CASE tools at the moment; and I don't believe in general-purpose translators. So I do not suffer these problems. I do believe that the construction of an appropriate meta-model is necessary in order to achieve the simplest possible models of a domain. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Finn wrote: > To solve this problem I put an "object" in the bridge to the > implementation domain. On being created the instance looks to see > if there are any other like instances. If there are none it moves > to the next state and invokes the operation, if there are it does > nothing. When the operation has finished the final state looks to > see if there exists any other like instance. If there is it sends > an event to it to move it to the state where it can invoke the > operation, if not it does nothing. > > Might this be another (familiar) State Model pattern? It's definitely a common pattern. I think "Buffer" would be the appropriate name for it. However, some people would argue that the buffer belongs in a domain, not the bridge (Bridges contain no state). One problem that you should consider with this approach is the issue of the consistent data set. Let's say that two of these buffer objects are created at exactly the same time. One of 3 things can happen: 1. Both instances see that the other instance exists; so both do nothing 2. Neither object sees that the other exists; so both go to their next state 3. It works correctly. To ensure that (3) happens, you need a bit more synchronisation (One additional object is all that is needed).
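[Editor's illustration: a minimal C++ sketch of the buffer-with-guard idea described above. All names are invented, and the single additional synchronisation object is modelled here as a mutex that makes the "does another like instance exist?" test atomic with respect to creation and completion -- a sketch under those assumptions, not a prescription.]

#include <mutex>

// Hypothetical "buffer" in the bridge. The mutex is the one extra
// synchronisation object: it serialises the existence check against
// creation and completion, so outcomes (1) and (2) cannot occur.
class RequestBuffer {
public:
    // Called when a new buffer instance is created.
    static void created() {
        std::lock_guard<std::mutex> guard(lock);
        if (pending++ == 0)
            invokeOperation();      // no like instance exists: proceed
        // otherwise do nothing; an earlier instance is already running
    }
    // Called from the final state, when the operation has finished.
    static void finished() {
        std::lock_guard<std::mutex> guard(lock);
        if (--pending > 0)
            invokeOperation();      // hand over to a waiting instance
    }
private:
    static void invokeOperation() { /* forward to implementation domain */ }
    static std::mutex lock;
    static int pending;
};

std::mutex RequestBuffer::lock;
int RequestBuffer::pending = 0;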
> > There is a book out now called "Analysis Patterns : Reusable Object Models" > > by Martin Fowler, Addison-Wesley, ISBN 0201895420. > > Unfortunately, I don't know much about Design Patterns but would > like to know more. Is the above book a good place to start? Martin's a good writer. I haven't read this particular book, but it's probably quite good. You could also try GoF. Another good book is the AntiPatterns book (do a search at Amazon). Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Monroe... > I think there are (at least) two types of patterns: design (structural) and analysis. A design pattern is a way of organizing the constructs of a programming language (in this case, OOA) in a tried and true way that leads to a better (maintainable, readable, etc.) built product. An analysis pattern is really the abstraction of a subject matter in a way that multiple clients who require that same subject matter can reuse it without modification - in other words, domain analysis. No argument here. This view seems consistent with the S-M approach to subject matters, levels of abstraction, etc. I was referring to the inferred implication that Whipp was looking for ways to move patterns in general to another domain. I was citing why this doesn't work for your design patterns. I was also assuming that the inspection was done after domain analysis when your analysis patterns would be moved (i.e., all I would be looking for in the domain would be design patterns). Of course that assumption was pretty implicit because I wasn't making a distinction between design and analysis patterns. B-) > Incidentally, most GOF patterns do not directly apply to S-M OOA because they focus on interfaces and behavior. The GOF patterns that seem to work best are ones that focus on objects and relationships (such as Composite). True. That was what I was alluding to in suggesting that we might need to use colorization to convey that information to the translation. A lot of the GOF patterns would show up trivially in the IM but the interesting parts would be in the implementation. > We've also seen design patterns appear in our state models. An example is the case where one instance sends an event to multiple instances of another object, and then waits for all of the instances to complete their threads before advancing to the next state. One pattern for solving this is for the originating instance to keep a counter of the number of replies it is expecting. Each of the other instances sends an event back when it has completed its thread. The originating instance decrements its counter until it reaches zero, and then generates an event to itself to proceed on its thread. I don't have a cute name for this pattern, but we use it everywhere. I hadn't thought about this, but now that you mention it... This presents a variation on the point above in that design patterns may also span different diagrams in the S-M notation. In this example the pattern appears in the IM, SM, and OCM. This brings up an interesting question: if I use a design pattern, is it important to document that clearly? If you are using UML you can usually point to a class diagram and say, "Aha, they've used the Reverse Bathcoup Pattern!".
In your example this would probably be pretty clear looking at the SM but in other cases it might not be so obvious that a pattern was used at all. And if most of the pattern was executed in the implementation, things are even less clear. I believe it is valuable for posterity to understand what the original analyst was thinking. This could be documented in the various object and relationship descriptions, so the real question is: would it be desirable to have a notational adornment to make the use of patterns in the OOA clear in the diagrams themselves? -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > Anytime I see this type of "design" pattern in the OOA, I know that there is > design pollution. You have a concept (send events - wait for all consequent > actions to complete); but you have to describe it as a design. I guess I don't understand what you mean by "design pollution". A particular instance must transition to a new state when and only when processing elsewhere is completed. This seems to me to be a problem space issue that must be resolved in the OOA design. Is your objection that false alarms are sent so that the instance itself must determine when external processing is complete? That is, completion of processing should be determined elsewhere and a single notification event should be sent to the instance to cause its transition when processing is truly completed. > Faced with a well defined problem, my inclination is to tweak the > meta-model. This seems to me to be an extension of the methodology! -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Brad_Appleton-GBDA001@email.mot.com writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Finn writes: > Unfortunately, I don't know much about Design Patterns but would > like to know more. Is the above book a good place to start? I would humbly suggest the following online hypertext introduction, which is freely available from my website: "Patterns and Software: Essential Concepts & Terminology" http://www.enteract.com/~bradapp/docs/patterns-intro.html (replace ".html" with ".pdf" for the PDF format version) This is a paper that tries to thoroughly introduce all the common "software patterns" concepts, terms, origins, and popular books, and has loads of hyperlinks to other resources for seeking out further information. It was originally created for the purpose of helping new members of a local patterns-reading group come up to speed quickly so they could understand and participate in the conversation. It also mentions many different kinds of software patterns, including patterns about design, analysis, process/organization, and even configuration management. For the quick 10-minute overview without all the hyperlinks and prose, I have some slides for a 10-minute patterns-intro I had to give as part of another presentation.
"Patterns in A Nutshell: The bare essentials of software patterns" http://www.enteract.com/~bradapp/docs/patterns-nutshell.html (replace ".html" with ".pdf" for the PDF format version) These should be more than adequate starting points. Cheers! -- Brad Appleton | http://www.enteract.com/~bradapp/ "And miles to go before I sleep." | 3700+ WWW links on CS & Sw-Eng Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > > Anytime I see this type of "design" pattern in the OOA, I know that there is > > design pollution. You have a concept (send events - wait for all consequent > > actions to complete); but you have to describe it as a design. > > I guess I don't understand what you mean by "design pollution". A particular > instance must transition to a new state when and only when processing elsewhere > is completed. This seems to me to be a problem space issue that must be > resolved in the OOA design. > > Is your objection that false alarms are sent so that the instance itself must > determine when external processing is complete? That is, completion of > processing should be determined elsewhere and a single notification event should > be sent to the instance to cause its transition when processing is truly > completed. It isn't the false-alarms themselves. The problem is that I've had to invent an algorithm (count events) to describe a problem-domain concept. But my architecture may have its own synchronisation mechanisms. My translator therefore has to first abstract the problem-domain concept from my model; and then re-implement it using the architectural mechanism. Otherwise the design will carry over the counter/flags from the model. There is no reason why sycnchronisation should be described in terms of an event counter. That is purely an invented concept. When I have to invent an algorithm to describe a feature of the problem domain, I am doing design. An SM model should not contain design, so I use the term 'design pollution'. > > Faced with a well defined problem, my inclination is to tweak the > > meta-model > > This seems to me to be an extension of the methodology! My attachment to OOA is not as strong as my attachment to RD. RD is a process that, in each step, maps the population of one meta-model onto another. I don't see anything special about OOA as a starting point. If I am able to abstract patterns out of an OOA model, then I will do so: and create a new meta-model as my starting point. Of course, I'll stick with OOA if I can (Occam's razor); and I'll use wormholes in preference to meta-modelling. But enhancing the meta-model is too a useful a tool to discard just because its not pure SM. The OIM is generally pretty sound. I may want to define semantics for (M) attributes; but, in general, it can be used to describe many other meta-models. The behavioural models in OOA are not, IMHO, as useful. Whereas the OIM is a normalised model, the behavioural models are meerly minimal. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. Allen Theobald writes to shlaer-mellor-users: -------------------------------------------------------------------- Way back Theobald asked... >> If I have two domains: Ui and App. And they are connected >> via a bridge or wormhole. How is the code for bridge usally >> realized? To which Lahman responded... 
"Each domain implements a set of synchronous services corresponding to the bridge. Each synchronous service represents a different worm hole (i.e., a distinct communication). To do this in C++ you might have a different Bridge class associated each domain and they talk to one another...this approach is rather strongly suggested by the recent wormholes paper from PT." So, naturally I pondered this for a while , then decide to try it by hand with a simple (non-working fictitious) app. I want to set the RF operating channel of a radio, by going through the parallel port of a PC to a peripheral device. The value range for the RF operating channel is 0-259. This value will be passed via the command-line using -rNumber (i.e., -r200 set it to 200). The Ui<->App bridge seems like an ideal candidate to enforce this requirement! The peripheral device will ACK or NACK via the parallel port. No timeouts! Trying to keep it simple. Below is a first-cut. Ignore the details (ha!) of code. I just want to know how the pieces (Ui, App, Ui<->App bridge) are constructed and fit together (see "Now What" comment!). Happy Holidays, Allen ----------8<----------cut here----------8<---------- class CUiBridge { void SetRfOperChan(int& rfOperChan) { if ( m_app.SetRfOperChan(rfOperChan) ) cout << "Query RF successfull"; else cout << "Query RF unsuccessfull"; } CAppBridge m_app; }; class CAppBridge { public: bool SetRfOperChan(u_int16 rfOperChan=0) { if ( (rfOperChan < 0) || (rfOperChan > 259) ) return false; // Now what? } }; void main(int argc, char *argv[]) { CUiBridge g_uiBridge; if ( /* command-line switch -r200 */ ) { g_uiBridge.SetRfOperChan(200); } } lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > It isn't the false-alarms themselves. The problem is that I've had to invent > an algorithm (count events) to describe a problem-domain concept. But my > architecture may have its own synchronisation mechanisms. My translator > therefore has to first abstract the problem-domain concept from my model; > and > then re-implement it using the architectural mechanism. Otherwise the design > will carry over the counter/flags from the model. > > There is no reason why sycnchronisation should be described in terms of an > event counter. That is purely an invented concept. When I have to invent an > algorithm to describe a feature of the problem domain, I am doing design. An > SM model should not contain design, so I use the term 'design pollution'. But if I carry this argument to an extreme, isn't everything in the OOA an invented concept? The OOA is describing a new, unique solution to a particular problem. I think that if the logical solution expressed by the OOA requires an explicit synchronization mechanism, that is coincidental to the fact that the architecture has its own implementation synchronization mechanisms. In particular, I do not see why the translation would re-implement the synchronization. The translation should not even know that synchronization was the logical goal of the way the analyst put together the model, so it shouldn't do anything out of the ordinary -- it should simply translate the events, attribute writes, etc. as it would any others. It also seems to me that adding a thread to the meta model to resolve a particular problem space issue simply replaces the OOA "design pollution" with a more general one in the meta model. 
The thread is an invented concept at a higher level and it still has to be explicitly invoked in the OOA to solve the problem. > My attachment to OOA is not as strong as my attachment to RD. RD is a process that, > in each step, maps the population of one meta-model onto another. I don't see > anything special about OOA as a starting point. I've mellowed in my dotage and I tend to agree with you. So long as the OOA meta model describes a useful suite of OO constructs it shouldn't matter. Of course one could debate what constitutes a useful set of OO constructs. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com "Keith Nicholas" writes to shlaer-mellor-users: -------------------------------------------------------------------- Hi... > > > There is a book out now called "Analysis Patterns : Reusable > Object Models" > > by Martin Fowler, Addison-Wesley, ISBN 0201895420. > > Unfortunately, I don't know much about Design Patterns but would > like to know more. Is the above book a good place to start? > It's a good book. Great examples of good abstractions. I'm not sure if it's the best book for "patterns" in general, but it is still a good book on its own. I think, from my limited knowledge of SM, it is possibly more useful than GOF. However, anyone serious about learning what patterns are about should read GOF, and hunt the net. Keith Sam Walker writes to shlaer-mellor-users: -------------------------------------------------------------------- Daniel, I would just like to point out that ESMUG is a good source of analysis patterns. Since I have subscribed I have noticed many SM analysis patterns pop up. Modelling questions are asked frequently in ESMUG, and typically there ends up being a generally agreed upon way to model a particular problem. It is a shame that these analysis/modelling patterns have not been published (to my knowledge), as they would be incredibly useful for new SM users. Having said this, I could argue that each problem is unique i.e. in a perfect analysis world ... one problem should map exactly to one model, and vice versa e.g. |Problem|<------------>|Model| 'one model captures one problem.' But using analysis patterns would tend to suggest, |Problems|<<----------->|Model| 'one model captures many problems.' This would suggest that analysis/modelling patterns should not be used as a final model. Instead, they can be used as extremely useful starting points from which a model can converge. This way the models tend to converge a lot quicker. Sam Walker. "Lynch, Chris D. SDX" writes to shlaer-mellor-users: -------------------------------------------------------------------- If you take the view that it is only packaging which separates a "design pattern" from a focussed list of hints at solving a certain class of problem, I submit that the PT books, "Modeling the World in Data" and "Modeling the World in States" (both by Shlaer and Mellor) are good sources of "design patterns" for Shlaer-Mellor modelling. In particular, the tradeoff analysis of when to formalize an object or its lifecycle comes to mind. Also helpful, as has been pointed out, are ESMUG discussions of particular real-world models which were successful. Another thought on this is the idea of "antipatterns", i.e., modeling ideas which led to spectacular failures. I suspect all experienced analysts could share a wealth of these.
-Chris ------------------------------------------- Chris Lynch Abbott Ambulatory Infusion Systems San Diego, Ca LYNCHCD@HPD.ABBOTT.COM ------------------------------------------- Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Responding to Whipp... > > There is no reason why synchronisation should be described in terms of an > > event counter. That is purely an invented concept. When I have to invent an > > algorithm to describe a feature of the problem domain, I am doing design. An > > SM model should not contain design, so I use the term 'design pollution'. > > But if I carry this argument to an extreme, isn't everything in the OOA an invented > concept? The OOA is describing a new, unique solution to a particular problem. If you take anything to extremes, you tend to get silly effects. It is more useful to look at the semantic gap between the concept you want to express and the way you have to express it. The problem domain has no concept of a synchronisation counter as a property of the object doing the synchronisation. It is an invented attribute. > I think that if the logical solution expressed by the OOA requires an explicit > synchronization mechanism, that is coincidental to the fact that the architecture has > its own implementation synchronization mechanisms. In particular, I do not see why > the translation would re-implement the synchronization. The translation should not > even know that synchronization was the logical goal of the way the analyst put > together the model, so it shouldn't do anything out of the ordinary -- it should > simply translate the events, attribute writes, etc. as it would any others. Let me give a very specific example (One I pondered long and hard before concluding that an additional process-type was needed in the meta-model). The problem can be simplified to "Tell all input pins to update their value; and tell me when that's done". The implementation happens to be that each input pin value is stored in a flip-flop; which has an enable input and a clock input. To "update the value", all I need to do is assert the enable input(s) for 1 clock cycle. The synchronisation is that clock cycle. (Please accept that there are good reasons for not simply using a write accessor) To move from a counter-based description to a clock-synchronised implementation is a non-trivial mapping. It requires the translator to work out (or be told) that the counter is a synchroniser; and all the "I'm done" events don't exist. It's not impossible, but it's the sort of thing that adds unnecessary complexity. (If my meta-model has a synchronisation concept, then it is trivial for a translator to _add_ a counter for a software implementation, if needed.) My addition to the meta-model, as seen by the OOA, is very simple. It is an extension to the delayed event mechanism in OOA96. Where OOA96 has "Generate C1 after 20 ns", I would have "Generate C1 after synch P1", which means "Generate P1 to all specified instances of Pin: track consequent threads. When all threads have died, generate C1 to specified Channel(s)". > The translation should not even know that [XXX] was the logical goal... Yes, it should. The reason why human designers are better than automated ones is that humans know the intent. To my mind, the purpose of the meta-model is to define a framework of intention.
The skill in designing a meta-model is to work out which aspects of intent can be abstracted into the meta-model; and which can remain (unformalised) in its population. Consider "The cat has black fur". The meta model abstracts the intent that "we want to define a property of a thing". The fact that the thing is a cat (etc.) is not considered important; so a translator should not be expected to know what a cat is. If a translator did need to know what a cat is, then the cat must become a concept in a meta-model. A translator must never be given an understanding of concepts that are not in its meta-model. This might seem a rather extreme statement, but I have found that ignoring it leads to maintenance problems. > It also seems to me that adding a thread to the meta model to resolve a > particular problem space issue simply replaces the OOA "design pollution" > with a more general one in the meta model. The thread is an invented concept > at a higher level and it still has to be explicitly invoked in the OOA to > solve the problem. Defining a concept in the meta-model is not design-pollution, because we don't describe how to implement the concept - we just describe its effects in the meta-model. Dynamic descriptions in the meta-model define how to simulate, not how to implement. The translator will decide how to map the (static) concept onto the architecture. Obviously the concept must be used from the OOA; but, again, its use is a static fact that a translator can utilise. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > > > But if I carry this argument to an extreme, isn't everything in the OOA an invented > > concept? The OOA is describing a new, unique solution to a particular problem. > > If you take anything to extremes, you tend to get silly effects. It is more > useful to look at the semantic gap between the concept you want to express > and the way you have to express it. The problem domain has no concept of a > synchronisation counter as a property of the object doing the synchronisation. > It is an invented attribute. But the problem domain does have a concept of synchronization that is at least implicit in the constraints on the order of processing. S-M dictates that all significant processing be done via FSMs. This limits the choices for describing that synchronization. Counting events seems like one of those choices for describing that synchronization in the OOA. I understand your reluctance to put in the attribute -- it's basically the same one I have to your (M) attributes. I would certainly look for another way to model the processing so that only a single event was issued to trigger the transition. But often one ends up just moving the counter to a different object. I see this problem as being endemic to handling flow of control via events between state machines. Given this, I see the inclusion of the attribute as a lesser evil to modifying the meta model. [Note that my alternative to the attributes in our (M) discussion involved a modification to a notation that was already being added to the meta model.] > The problem can be simplified to "Tell all input pins to update their > value; and tell me when that's done".
The implementation happens to be > that each input pin value is stored in a flip-flop; which has an enable > input and a clock input. To "update the value", all I need to do is > assert the enable input(s) for 1 clock cycle. The synchronisation is that > clock cycle. (Please accept that there are good reasons for not simply > using a write accessor) > > To move from a counter-based description to a clock-synchronised implementation > is a non-trivial mapping. It requires the translator to work out (or be > told) that the counter is a synchroniser; and all the "I'm done" events don't > exist. It's not impossible, but it's the sort of thing that adds unnecessary > complexity. My problem with this example is that it seems to be mixing levels of abstraction (subject matters) in the domain. In the domain where "Tell all the inputs..." is relevant, flops and clock cycles are not relevant abstractions. I would do this over an asynchronous bridge by sending an update event to the wormhole and waiting for the confirmation event. In this domain the counter is still a valid description of the synchronization and requires no special translation treatment. Let's assume that there is no second software domain to capture flops and clocks so the hardware read/writes are done in the bridge directly. The bridge converts the update event to an assert. The bridge then waits until the next clock edge and generates an event back to the domain (leaving the detection of the clock edge as an exercise for the implementation). The clock synchronization is captured in the bridge specification rather than the translation doing handstands to map a special interpretation of the OOA. Now let's assume there is a PIO domain that does model flops and clocks. It receives an event from the bridge for the update. Since it knows about clocks and whatnot, presumably it can send back an event to the bridge on the clock cycle following the assert to confirm. In this domain there is no synchronization requiring a counter because the processing for each bridge input is independent. Again, the translation does its thing without special knowledge of the synchronization in the client domain. > > The translation should not even know that [XXX] was the logical goal... > > Yes, it should. The reason why human designers are better than automated > ones is that humans know the intent. To my mind, the purpose of the > meta-model is to define a framework of intention. The skill in designing > a meta-model is to work out which aspects of intent can be abstracted into > the meta-model; and which can remain (unformalised) in its population. > > Consider "The cat has black fur". The meta model abstracts the intent that > "we want to define a property of a thing". The fact that the thing is a > cat (etc.) is not considered important; so a translator should not be > expected to know what a cat is. > > If a translator did need to know what a cat is, then the cat must become > a concept in a meta-model. A translator must never be given an understanding > of concepts that are not in its meta-model. This might seem a rather > extreme statement, but I have found that ignoring it leads to maintenance > problems. True, but I think that when the translation has to understand the problem semantics of the model it is a signal that there is something wrong with the domain subject matter. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St.
L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Theobald... > I want to set the RF operating channel of a radio, by going through > the parallel port of a PC to a peripheral device. The value range for > the RF operating channel is 0-259. This value will be passed via the > command-line using -rNumber (i.e., -r200 sets it to 200). > > The Ui<->App bridge seems like an ideal candidate to enforce this > requirement! At the risk of being picky, typically one has Ui<->App<->Pio, where the user interacts with the user interface (Ui domain) and the request is forwarded to the main application (App domain), which dispatches it to the appropriate service domain -- physical I/O (Pio domain) in this case. Ideally the highest level application domain on the domain chart is pretty simple -- basically just a dispatcher and coordinator for various services. But I'll use your nomenclature below. > The peripheral device will ACK or NACK via the parallel port. No > timeouts! Trying to keep it simple. > > Below is a first cut. Ignore the details (ha!) of the code. I just want > to know how the pieces (Ui, App, Ui<->App bridge) are constructed and > fit together (see "Now What" comment!). > class CUiBridge > { > public: > void SetRfOperChan(int rfOperChan) > { > if ( m_app.SetRfOperChan(rfOperChan) ) > cout << "Query RF successful"; > else > cout << "Query RF unsuccessful"; > } > > private: > CAppBridge m_app; > }; > > class CAppBridge > { > public: > bool SetRfOperChan(int rfOperChan = 0) > { > if ( (rfOperChan < 0) || (rfOperChan > 259) ) > return false; > > // Now what? CAppBridge needs the appropriate header files for whatever entities in the App domain will actually handle the bridge requests, in this case the operation of setting the channel. If this is a complex operation that involves state machine processing within the App domain, SetRfOperChan (from CAppBridge) would insert an event onto the domain's event queue. [The architecture will provide an event queue and an interface that gets invoked from SetRfOperChan. Once the event is on the queue, SetRfOperChan's job is done and it returns. The domain's queue manager will eventually process the event.] If it is a simple hardware write, then the SetRfOperChan routine would probably do the write directly by mapping the correct register address (via the header file) and writing the value to it. For a lot of real hardware applications the Pio domain is realized because it doesn't have complex operations -- it simply reads/writes registers. In these situations the CAppBridge _is_ the domain because it simply does hardware reads/writes. In more complex cases where values are split across registers, scaled, stored for simulation purposes, etc. you might have a true OOA domain with objects and state machines. > > } > }; > > int main(int argc, char *argv[]) > { > CUiBridge g_uiBridge; > > if ( argc > 1 /* command-line switch -r200 */ ) { > g_uiBridge.SetRfOperChan(200); > } > return 0; > } I assume you left the rest of the initialization (e.g., the objects in the Ui and App domains) out to keep things simple. Usually g_uiBridge would be created as part of the Ui domain initialization -- it is just one more piece of the Ui domain. Note that this is only one way to do it. One could have a separate class for the bridge itself, say CBridge, that contained a routine, say UiDoit_1, that invoked the SetRfOperChan from CAppBridge.
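[Editor's illustration: a minimal sketch of that alternative, reusing the CAppBridge defined in Allen's post; the UiDoit_1 signature is an assumption, not anything prescribed by the method.]

// Hypothetical CBridge: the only class that needs headers from both
// domains. The Ui side calls UiDoit_1; CBridge forwards to the App side.
class CBridge
{
public:
    bool UiDoit_1(int rfOperChan)
    {
        return m_appBridge.SetRfOperChan(rfOperChan);
    }

private:
    CAppBridge m_appBridge;   // the CAppBridge defined earlier
};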
The SetRfOperChan function in CUiBridge would invoke UiDoit_1. The advantage of this approach is that CUiBridge and CAppBridge do not have to be modified if a domain is swapped. Because of the naming conventions one can construct CBridge to contain the relevant XXDoit_n interfaces for a particular application and only CBridge has to have both domains' headers. If a domain is swapped in the application, only CBridge needs to be modified. The disadvantage, of course, is the extra level of indirection for the bridge calls through CBridge. -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > Let's assume that there is no second software domain to capture flops and clocks > so the hardware read/writes are done in the bridge directly. The bridge converts > the update event to an assert. The bridge then waits until the next clock edge > and generates an event back to the domain (leaving the detection of the clock edge > as an exercise for the implementation). The clock synchronization is captured in > the bridge specification rather than the translation doing handstands to map a > special interpretation of the OOA. All these bridges and synchronisation events get very complex. My implementation is a basic datapath+control architecture; and the control state machines are driven by the clock (There's not even a "first" software domain, let alone a "second"). In the implementation, the synchronisation is simply an unconditional transition to a next state. If I introduce protocols, either as event counters or as wormholes and another domain, then I have to eliminate them during translation. Hardware is not like software: I can't call a function to abstract something. Every complexity has a cost in area, speed or power. It's far easier to eliminate things if the intent is defined in the meta model, than if they are designed in the OOA. Mapping events onto anonymous clock edges isn't difficult; but reducing a counter to a single clock edge is a bit more tricky. I know I'm worrying about implementation, and that I should ignore it; but there is no point in constructing a model that's more complex than it needs to be, just so I can have the fun of implementing a high-complexity translator. I am using implementation to illustrate the problems caused by the model complexity; but the complexity is easily identified in the model itself. Just listen to the number of people who say "I use this idiom to model this concept". Such an idiom is strongly suggestive of an application-independent abstraction. My issue isn't that I can't model the situation using plain OOA. I clearly could (and, in the past, have). But, when I model something, I should be exploring the problem domain, not devising clever ways of forcing it into an inappropriate meta-model. To bring this back on-topic: I see the meta-model as the place to define analysis patterns. These can then be referred to from the model. The semantics of the patterns can then be embedded in the translator. (Note: *design* patterns are *not* included in the meta-model: they are defined in the architecture ... which is the meta-model for the next stage of translation) Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone.
+49 89 636 83743 Opinions are my own. Factual statements may be incorrect. smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Another go at responding to Jon Monroe... > We've also seen design patterns appear in our state models. An example is > the case where one instance sends an event to multiple instances of another > object, and then waits for all of the instances to complete their threads > before advancing to the next state. One pattern for solving this is for the > originating instance to keep a counter of the number of replies it is > expecting. Each of the other instances sends an event back when it has > completed its thread. The originating instance decrements its counter until > it reaches zero, and then generates an event to itself to proceed on its > thread. I don't have a cute name for this pattern, but we use it > everywhere. I've just remembered a very powerful, but probably *proscribed*, technique (that I still use) to solve this problem. It's called Warping (OK, I just gave it a name!). The nice thing about this technique is that you don't have to worry about waiting for all of the instances to complete their threads before advancing to the next state. You just assume it's going to be alright. :-) Basically, after the originating instance has sent all the events to the other object, a synchronous call to the architecture is made which has the effect of suspending the current state action and processing the next event on the queue. In fact, it's just the same call that starts the architecture processing an external event. When you come out of Warp (as it were) all the threads have completed and it's safe to generate the event that moves the originating instance to the next state. Of course, all this assumes a number of things about your architecture, not the least of which is that it must be re-entrant. BTW, many thanks to Brad, Dave and Keith for their Design Patterns info. Mike -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... Regarding the bridge talking directly to hardware: > All these bridges and synchronisation events get very complex. My implementation > is a basic datapath+control architecture; and the control state machines are > driven by the clock (There's not even a "first" software domain, let alone a > "second"). In the implementation, the synchronisation is simply an unconditional > transition to a next state. If I introduce protocols, either as event counters, > or wormholes and another domain, then I have to eliminate them during > translation. I know you have a fixation on translation and architectures, but this seems a tad extreme. It sounds like you are eliminating the OOA and translating directly from a statement of requirements. If you do have an OOA domain where sending events to Pins and waiting for them to complete processing is a problem space constraint, then in my solution you do not have to eliminate the event counters. They are part of the domain translation just as any other write accessor, test or event generation would be. The translation should neither know nor care that the write/test/generate is keeping track of the processing of multiple events -- that is strictly a semantic issue of the OOA solution algorithm in that domain.
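[Editor's illustration: a minimal sketch, with invented names and a conventional software architecture assumed, of the sort of code an ordinary translation might emit for the counter idiom -- note that nothing in it is synchronisation-specific from the translator's point of view.]

// Hypothetical translated action: the counter is just an attribute,
// and the synchronisation is just a write, a test and a generate.
struct Originator {
    int repliesExpected;            // the counter attribute from the IM

    // Action for the "reply received" event.
    void onReplyReceived() {
        --repliesExpected;          // ordinary write accessor
        if (repliesExpected == 0)   // ordinary test
            generateProceed();      // ordinary event generation (to self)
    }

    void generateProceed() { /* post the "proceed" event to this instance */ }
};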
If the bridge writes directly to hardware, then the bridge has to understand the hardware protocols and that is where you have to worry about knowing when a flop has completed the update. But at that point all the bridge cares about is that particular assert; it needn't know nor care whether 1 pin or 50 pins are being updated in an identical fashion as part of the domain's processing. > I know I'm worrying about implementation, and that I should ignore it; but > there is no point in constructing a model that's more complex than it needs > to be, just so I can have the fun of implementing a high-complexity translator. As I indicated before, my view is that using a counter is a natural outgrowth of having all high level flow of control described with event flows. SMALL and other action languages that are faithful to the ADFD paradigm tend to be clumsy for computational algorithm descriptions. I see the use of counters to track processing in an OOA as simply another example of wordiness that is an outgrowth of the notational paradigm. I have yet to be convinced that the translator needs any special complexity to handle this example. > To bring this back on-topic: I see the meta-model as the place to define > analysis patterns. These can then be refered to from the model. The semantics > of the patterns can then be embedded in the translator. (Note: *design* > patterns are *not* included in the meta-model: they are defined in the > architecture ... which is the meta-model for the next stage of translation) I understand that you want to abstract the pattern content into the meta model and the translator implementation so that the analyst doesn't need to worry about the pattern details in the OOA. My last point below is simply that this is not necessary. The other points are that in doing so, it may be more trouble than it is worth. While the idea is superficially appealing, I think the main problem lies in the descriptive nature of such patterns. The crucial information in design patterns lies in the natural language text that is only partially organized by the design pattern paradigm. That description of semantics is what makes them worthwhile and it is also what would make them very difficult to rigorously describe in a meta model -- existing "pattern languages" notwithstanding. Also, it is not clear how one would "refer" to them in the OOA. Such patterns describe the interactions of existing problem space objects that would be in the OOA already because they have other characteristics not defined by the pattern. I can envision schemes, such as a tagging system, where you tag object A as corresponding to the X component of pattern Z. You would probably have to identify which relationships corresponded to ones in the pattern as well (a sketch of what such a tagging table might look like appears below). I think this might start to get kind of messy (i.e., coloration to gingerbread house extremes). I also have a hard time thinking about how things like conditional event generation in specific actions would be "referred to". These references would also have the effect of splitting the essential solution description apart in ways that I don't like. Unless I have the design pattern thoroughly memorized, I would have to split my attention from a given IM, OCM, STD, or ADFD to the meta model's pattern description to understand what's going on in that diagram. I see the design patterns as a tool for creating solutions in a consistent manner and avoiding design errors. But once the solution is created, I want to see it all at once at a given level of abstraction.
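[Editor's illustration of the tagging idea mentioned above -- the table format is entirely invented, no tool or notation is implied.]

// Purely hypothetical: a side table ("tags" or colorations) recording
// which model elements play which pattern roles.
struct PatternTag {
    const char* pattern;   // pattern name
    const char* role;      // role within the pattern
    const char* element;   // OOA element that plays the role
};

static const PatternTag tags[] = {
    { "ReplyCounterSync", "Originator", "Object: Dispatcher"          },
    { "ReplyCounterSync", "Worker",     "Object: Pin"                 },
    { "ReplyCounterSync", "Counter",    "Attribute: Dispatcher.count" },
    { "ReplyCounterSync", "DoneEvent",  "Event: PIN5: UpdateComplete" },
};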
Finally, I'm not sure what needs to be embedded in the translator. This point probably reflects much of our difference over the example previously. I believe that an analysis pattern would be represented in an OOA largely in the action descriptions and the events generated. By their nature analysis patterns are a semantic description of the problem space, albeit semantics that are common across domains and applications. As such they should be fully describable, albeit wordily, using conventional OOA constructs so that an unmodified translator can be used. For example, suppose I have an application that would benefit from using analysis pattern X but due to my parochial proclivities I am unaware of pattern X. I go ahead and create an OOA that solves my problem. By sheer dumb luck my solution exactly matches that prescribed by analysis pattern X. I submit that my OOA would look exactly like and would be translated in the same way as that of someone else who solved the same problem using analysis pattern X (provided they did not dink with their meta model). -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Finn... > Basically, after the originating instance has sent all the events to > the other object, a synchronous call to the architecture is made > which has the effect of suspending the current state action and > processing the next event on the queue. In fact, it's just the same > call that starts the architecture processing an external event. > > When you come out of Warp (as it were) all the threads have > completed and it's safe to generate the event that moves the > originating instance to the next state. Given that the processing may require the target to generate other events to complete the processing, how do you know when all threads have completed? I suppose you could wait until the event queue was empty, but that might take a while if external events are also being plopped on the queue asynchronously. B-) -- H. S. Lahman There is nothing wrong with me that Teradyne/ATD could not be cured by a capful of Drano 179 Lincoln St. L51 Boston, MA 02111-2473 (Tel) (617)-422-3842 (Fax) (617)-422-3100 lahman@atb.teradyne.com smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to lahman... > > When you come out of Warp (as it were) all the threads have > > completed and it's safe to generate the event that moves the > > originating instance to the next state. > Given that the processing may require the target to generate other events to > complete the processing, how do you know when all threads have completed? I > suppose you could wait until the event queue was empty, but that might take a while > if external events are also being plopped on the queue asynchronously. B-) Yes, you're correct. Control returns (normally to the main program, but to the action if Warping) once the internal event queue is empty. The architecture I'm working with is not required to deal with asynchronously delivered events, but if it had to I would probably create an external event queue.
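[Editor's illustration: a minimal sketch of that two-queue idea, with invented names and dispatch details elided -- an assumption about how such an architecture might be arranged, not a description of any particular one.]

#include <queue>

struct Event { /* event label, target instance, supplemental data... */ };

// Hypothetical architecture fragment: during a Warp only the internal
// queue is drained; asynchronous external events are parked on a second
// queue and moved across when the warp completes.
class Architecture {
public:
    void warp() {
        while (!internal.empty()) {          // drain internal events only
            dispatch(internal.front());
            internal.pop();
        }
        while (!external.empty()) {          // now admit the outside world
            internal.push(external.front());
            external.pop();
        }
    }
    void postInternal(const Event& e) { internal.push(e); }
    void postExternal(const Event& e) { external.push(e); }
private:
    void dispatch(const Event&) { /* deliver to the target state machine */ }
    std::queue<Event> internal, external;
};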
Mike -- Mike Finn Dark Matter | Email: smf@cix.co.uk Systems Ltd | Voice: +44 (0) 1483 755145 Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- lahman wrote: > I know you have a fixation on translation and architectures, but this seems a tad > extreme. It sounds like you are eliminating the OOA and translating directly > from a statement of requirements. If only that were possible. > If you do have an OOA domain where sending events to Pins and waiting for them to > complete processing is a problem space constraint, then in my solution you do not have > to eliminate the event counters. They are part of the domain translation just as any > other write accessor, test or event generation would be. The translation should neither > know nor care that the write/test/generate is keeping track of the processing of > multiple events -- that is strictly a semantic issue of the OOA solution algorithm in > that domain. The problem is that the OOA solution algorithm is inappropriate. There is a huge semantic gap between the description in the problem domain, the structure of the OOA algorithm and the structure in the implementation. There is a much smaller gap between the problem domain and the implementation than between the problem domain and the OOA. I think you are still not truly taking on board the fact that I am doing a pure-hardware implementation. A crucial difference between hardware and software is that hardware is massively parallel and massively synchronised. Synchronisation of single-cycle paths (or even constant-length paths) in hardware is a non-issue (at this stage in the design). A clock does all the synchronisation (You can think of a reply-event as a hold-time backannotation point; but it doesn't really help and, arguably, is wrong). A hardware implementation of an OOA attribute is, typically, a set of flip-flops (possibly a register file). If I want to synchronise 15 threads, then I would need 4 flip-flops to store the counter and 15 clock cycles to decrement the counter to zero. I could introduce combinatorial logic to reduce this to a single clock cycle; but the reality is that it is all completely pointless. A more clever translator would realise that, because there is a clock, the whole synchronisation problem can be solved with no flip-flops, no clock cycles and no combinatorial logic. The batteries in your mobile phone will last longer with this solution. The example I gave was, unfortunately, related to an interface. I will agree that the problem can be moved to another domain (which I could translate manually). But if I apply this as a general solution then I end up with a lot of single-object domains. Domain pollution is generally thought of as mixing subject matters within a domain; but splitting a subject matter across several domains is, IMHO, equally undesirable. > I understand that you want to abstract the pattern content into the meta model > and the translator implementation so that the analyst doesn't need to worry > about the pattern details in the OOA. Close, but not quite. I want to [...] so that the analyst doesn't need to specify a specific algorithm that implements the pattern. If it really was pattern details that were being described, then it wouldn't be so bad. It would merely be redundant. "Generate events A to start threads; generate event B when consequent threads are done" is not the same as "Generate events A; count number of generates. Wait until this number of reply events are received.
Then generate event B". They might seem superficially similar; but the latter has a significant implementation bias. It can also lead to quite complex OOA models (because of that implementation bias). > While the idea is superficially appealing, I think the main problem lies in the > descriptive nature of such patterns. The crucial information in design patterns > lies in the natural language text that is only partially organized by the design > pattern paradigm. That description of semantics is what makes them worthwhile > and it is also what would make them very difficult to rigorously describe in a > meta model -- existing "pattern languages" notwithstanding. When a pattern has been seen a number of times, it is possible to see how its uses vary and what they have in common. The commonalities need no parameterisation. The variations can be analysed to identify the properties that describe these variations. These can be described as an information model, and form the basis for a formal parameterisation of the pattern. > Also, it is not clear how one would "refer" to them in the OOA. I used a deliberately non-committal term. The mechanism of referral provides the means to pass the parameters. If the pattern is behavioural (such as synchronisation) then a process may be used. Other patterns need more extreme measures (For example, my DFD proposal is a formalisation of the pattern for updating an (M) attribute when its dependee attributes change). Take, for example, the problem of a reset signal to a hardware component. The problem-domain says "when reset is asserted, the component returns to its initial state". This is a very simple concept but, in OOA, it can be very difficult to express. You need to write a value to every attribute in the model. You need a transition from every state. You may need to migrate subtypes; delete/create objects, empty the event queue, etc. And it all needs to be synchronised. It is much easier to add information to the model that parameterises the reset concept. Each attribute needs a reset value; each state model needs a reset state. Plus a few more bits and pieces. Moving the concept into a (the) meta model allows the reset state to be defined as static information, not as a complex dynamic behaviour. In this case, the pattern is "referred" to by adding the necessary information; and then, at some point in the model, using a "reset" process/wormhole. (And, in practice, the reset config is already defined as the initial population of the model). > Such patterns describe the interactions of existing problem space objects that > would be in the OOA already because they have other characteristics not defined > by the pattern. I can envision schemes, such as a tagging system, where you tag > object A as corresponding to the X component of pattern Z. Tagging may work, though you need to be careful that it doesn't cause a maintenance nightmare. I prefer to think in terms of integrated concepts though. "Tag" implies something that is added; and is essentially the same as Coloration. (BTW, coloring the synchronisation counter is another way of conveying intent to the translator; but I'd like to define just the intent, without specifying a counter) > These references would also have the effect of splitting the essential solution > description apart in ways that I don't like. You propose that I split my model into 2 domains, connected by a bridge. We seem to be arguing about 2 different ways to decompose the problem. Both have their strengths and weaknesses.
I don't see how splitting a domain into two, each time I want to synchronise a set
of threads, is better than adding a new specialised event generator (plus its
semantics). It may be purer SM, but that is only one element of "good".

> For example, suppose I have an application that would benefit from using analysis
> pattern X but due to my parochial proclivities I am unaware of pattern X. I go
> ahead and create an OOA that solves my problem. By sheer dumb luck my solution
> exactly matches that prescribed by analysis pattern X. I submit that my OOA would
> look exactly like and would be translated in the same way as that of someone else
> who solved the same problem using analysis pattern X (provided they did not dink
> with their meta model).

I'm not sure if this is relevant. If you don't "dink" with the meta model, then
the OOA formalism constrains you to model a concept in one of a small number of
ways. So it is quite likely that you will accidentally use the same idiom as a
person who already knows that idiom. And because they are both the same, they both
have the same weaknesses (and strengths).

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.


lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

> Yes, you're correct. Control returns (normally to the main program but, if
> Warping, to the action) once the internal event queue is empty.
>
> The architecture I'm working with is not required to deal with
> asynchronously delivered events, but if it had to I would probably
> create an external event queue.

You would probably only want to push incoming events onto that queue while the
thread is active and load them back on the regular queue when the thread
completes. There are situations where you can count on the relative sequence of
external events (but not their timing) in the OOA. In those situations there might
already be an external event on your queue that needs to be processed.

However, even without external events there is still a potential termination
problem. The target objects could place events on the queue that lead to separate,
arbitrarily complex processing. Since the event that your originating instance
places on the queue after the return to cause its transition is self-directed, it
takes precedence, so this would not normally be a problem. But if you are waiting
for the queue to empty to do the return you could be in trouble.

--
H. S. Lahman                     There is nothing wrong with me that
Teradyne/ATD                     could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Mike Finn wrote:

> I've just remembered a very powerful, but probably *proscribed*,
> technique (that I still use) to solve this problem. It's called
> Warping (OK, I just gave it a name!).
>
> The nice thing about this technique is that you don't have to worry
> about waiting for all of the instances to complete their threads
> before advancing to the next state. You just assume it's going to
> be alright.
:-)

> Basically, after the originating instance has sent all the events to
> the other object, a synchronous call to the architecture is made
> which has the effect of suspending the current state action and
> processing the next event on the queue. In fact, it's just the same
> call that starts the architecture processing an external event.
>
> When you come out of Warp (as it were) all the threads have
> completed and it's safe to generate the event that moves the
> originating instance to the next state.

The danger with this technique is the possibility of a deadlock. The time rules of
OOA say that if an event is delivered to an instance while it is processing a
state action, then it is queued. The time scope of a "warp" is within an action;
so, if any event is delivered to the warping instance (for any reason), then the
event will not be delivered until the end of the warp. Unfortunately, the warp
will not end until the event has been delivered.

If you define the "next-event" as part of the process that enters the warp (I
explained my delayed-event modification a few posts ago) then the current action
completes before you enter the warp; but the event is not sent until the end of
the warp.

Of course, my method goes a bit beyond your warp: it keeps track of threads. Thus
you can have multiple, concurrent, delayed events in the system. Your warp is a
simpler, though less general, approach. But I'd still advise you to implement it
in a way that moves the warp out of the state action.
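The deadlock is easy to see in a toy Python sketch (the two-line "architecture"
and all names here are illustrative only): the warping instance tries to drain the
queue, but an event addressed to the warping instance itself cannot be delivered
while its action is still in progress, so the queue can never empty.

    from collections import deque

    queue = deque()        # pending events: (target, event_name)
    busy = {"originator"}  # instances currently inside a state action

    def warp():
        # "suspend the current action and process the queue until empty"
        while queue:
            target, event = queue[0]
            if target in busy:
                # OOA time rules: events to a busy instance stay queued.
                # The warping instance stays busy until the warp ends, and
                # the warp cannot end while its own event sits on the queue.
                print("deadlock:", event, "undeliverable to", target)
                return
            queue.popleft()
            # ...dispatch the event to target's state machine here...

    queue.append(("originator", "E1"))  # someone events the warping instance
    warp()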
Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.


lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

Regarding the flop assert example:

> The problem is that the OOA solution algorithm is inappropriate. There is a
> huge semantic gap between the description in the problem domain, the structure
> of the OOA algorithm and the structure of the implementation. There is a much
> smaller gap between the problem domain and the implementation than between the
> problem domain and the OOA.
>
> I think you are still not truly taking on board the fact that I am doing a
> pure-hardware implementation.

You are correct, I did not think the entire model was a direct hardware
implementation. I am inclined to think that this might not be an appropriate thing
for an OOA to try to do. In fact, I think your first paragraph argues this point
-- if the notation creates a larger semantic gap than exists directly between the
problem domain and the implementation, there is something wrong with the picture.

The problem I see is that an event has a very different meaning in this context
than simply a message. As you point out, the set of events implicitly captures
both the idea of parallelism and a clock cycle. That is, one really should not
need to use confirmation events because each Pin processes in parallel and
completes on the clock cycle. This is because all those events sent to Pin _must_
arrive and _must_ be processed at the same time due to Physical Law. This is all
antithetical to the S-M model of events and time.

Interestingly, this reverses my position in a current discussion I am having with
our old nemesis, Robert Martin, on OTUG, where I am arguing that OT can be a
general problem solving technique while he is arguing that it is only applicable
to software. I think your example is a good one for demonstrating that the S-M
notation, at least, is not suitable for hardware design (though it could be used
to develop a gate level hardware simulator). I still think it is a more general
problem solving technique, but thanks to your example I will have to back off
hardware design as a viable arena. B-)

> Close, but not quite. I want to [...] so that the analyst doesn't need to
> specify a specific algorithm that implements the pattern. If it really was
> pattern details that were being described, then it wouldn't be so bad. It
> would merely be redundant.

But it seems to me that most design patterns I have seen are specific algorithms
for handling interfaces.

> "Generate events A to start threads; generate event B when consequent
> threads are done" is not the same as "Generate events A; count number of
> generates. Wait until this number of reply events are received. Then
> generate event B." They might seem superficially similar; but the latter
> has a significant implementation bias. It can also lead to quite complex
> OOA models (because of that implementation bias).

In my view your first description is just as specific as the second. How is
generating threads and waiting for them all to finish any less specific than
generating confirmation events and waiting for them all to be received?

As far as the complexity in the OOA models, I see no difference. Instead of
generating a confirmation event in the target instances you have to add syntax to
terminate the thread. Instead of counting the returned events in the originator,
it has syntax to start each thread. Plus you have to define a bridge to monitor
the threads and generate the final transition event when they are all done.

BTW, don't you have a problem with the thread approach of knowing when the thread
is terminated? It seems to me this implies that the state where the thread
terminates has to have context information if there are ways to transition to it
that aren't in the thread.

> When a pattern has been seen a number of times, it is possible to see how its
> uses vary, and what they have in common. The commonalities need no
> parameterisation. The variations can be analysed to identify the properties
> that describe them. These can be described as an information model, and form
> the basis for a formal parameterisation of the pattern.

But there is much more to the pattern than an IM. An IM would merely provide a
mapping for which OOA objects and relationships were relevant to the pattern. Most
of the interesting parts of the pattern are dynamic; they involve behavior
relevant to the interfaces between objects. The pattern dictates that you update
attribute A in state B1's action and conditionally generate event E1 from state
B3's action based upon the current value of A, etc. This is the part that I see as
difficult to express in your meta model because it gets sprinkled all over the
existing IM, SM, PM, and OCM.

> > Also, it is not clear how one would "refer" to them in the OOA.
>
> I used a deliberately non-committal term. The mechanism of referral provides
> the means to pass the parameters. If the pattern is behavioural (such as
> synchronisation) then a process may be used. Other patterns need more
> extreme measures. (For example, my DFD proposal is a formalisation of the
> pattern for updating an (M) attribute when its dependee attributes change.)

I agree that your DFD is an example.
But it also took you a while to come up with a formalism that is still not
finalized and is still open to some, albeit minor, debate. And that solution
represents major changes to the basic notation. This strikes me as a hard row to
hoe just to support a single analysis pattern. [When I got to your last point I
understood the real reason you want to do patterns this way.]

> It is much easier to add information to the model that parameterises the
> reset concept. Each attribute needs a reset value; each state model needs
> a reset state. Plus a few more bits and pieces. Moving the concept into a
> (the) meta model allows the reset state to be defined as static information,
> not as a complex dynamic behaviour. In this case, the pattern is "referred"
> to by adding the necessary information and then, at some point in the
> model, using a "reset" process/wormhole. (And, in practice, the reset config
> is already defined as the initial population of the model.)

But why do it in the meta model? The events and attribute values must be
identified in any case. The bridge is no more difficult to write for a particular
application than parameterizing an existing meta model construct. But developing a
bullet-proof construct in the meta model that will handle all possible convoluted
reset protocols strikes me as a daunting task. [Again, your last point below made
this more clear.]

> > These references would also have the effect of splitting the essential solution
> > description apart in ways that I don't like.
>
> You propose that I split my model into 2 domains, connected by a bridge. We
> seem to be arguing about 2 different ways to decompose the problem. Both
> have their strengths and weaknesses. I don't see how splitting a domain into
> two, each time I want to synchronise a set of threads, is better than adding
> a new specialised event generator (plus its semantics). It may be purer SM,
> but that is only one element of "good".

My issue is that if you use a meta model construct to handle _some_ of the
behavior and relationships -- as is the case for most analysis patterns -- I have
to jump back and forth between the meta model and the OOA to understand what is
happening at a given level of abstraction.

> > For example, suppose I have an application that would benefit from using analysis
> > pattern X but due to my parochial proclivities I am unaware of pattern X. I go
> > ahead and create an OOA that solves my problem. By sheer dumb luck my solution
> > exactly matches that prescribed by analysis pattern X. I submit that my OOA would
> > look exactly like and would be translated in the same way as that of someone else
> > who solved the same problem using analysis pattern X (provided they did not dink
> > with their meta model).
>
> I'm not sure if this is relevant. If you don't "dink" with the meta model, then
> the OOA formalism constrains you to model a concept in one of a small number of
> ways. So it is quite likely that you will accidentally use the same idiom as
> a person who already knows that idiom. And because they are both the same, they
> both have the same weaknesses (and strengths).

Fascinating. So what you are really after are extensions of the notation that
allow you to do neat things that you can't do now. I don't think that is what
analysis patterns are about. I believe they are supposed to be used to describe
idioms in the prevailing notation, regardless of whether that notation is
constraining.
--
H. S. Lahman                     There is nothing wrong with me that
Teradyne/ATD                     could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


"Lynch, Chris D. SDX" writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lahman and Whipp (on a very old subject, so I apologize if some
readers don't recognize it.)

>> (Whipp) The problem is that the OOA solution algorithm is inappropriate. There
>> is a huge semantic gap between the description in the problem domain, the
>> structure of the OOA algorithm and the structure of the implementation. There
>> is a much smaller gap between the problem domain and the implementation than
>> between the problem domain and the OOA.
>>
>> I think you are still not truly taking on board the fact that I am doing a
>> pure-hardware implementation.
>
> (Lahman) You are correct, I did not think the entire model was a direct
> hardware implementation. I am inclined to think that this might not be an
> appropriate thing for an OOA to try to do. In fact, I think your first
> paragraph argues this point -- if the notation creates a larger semantic gap
> than exists directly between the problem domain and the implementation, there
> is something wrong with the picture.

Thanks to both of you for a compelling and enlightening answer to the old ESMUG
question, "under what circumstances is SM OOA inappropriate?" Clearly, this occurs
when there are other tools (both software and conceptual) specifically designed to
solve the problem at hand.

Paraphrasing Michael Jackson, a method which knows what you are trying to do will
be much more helpful than one which doesn't.

-Chris

-------------------------------------------
Chris Lynch
Abbott Ambulatory Infusion Systems
San Diego, Ca
LYNCHCD@HPD.ABBOTT.COM
-------------------------------------------


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> You are correct, I did not think the entire model was a direct hardware
> implementation. I am inclined to think that this might not be an appropriate
> thing for an OOA to try to do. In fact, I think your first paragraph argues
> this point -- if the notation creates a larger semantic gap than exists
> directly between the problem domain and the implementation, there is something
> wrong with the picture.
>
> The problem I see is that an event has a very different meaning in this
> context than simply a message. As you point out, the set of events implicitly
> captures both the idea of parallelism and a clock cycle. That is, one really
> should not need to use confirmation events because each Pin processes in
> parallel and completes on the clock cycle. This is because all those events
> sent to Pin _must_ arrive and _must_ be processed at the same time due to
> Physical Law. This is all antithetical to the S-M model of events and time.
>
> Interestingly, this reverses my position in a current discussion I am having
> with our old nemesis, Robert Martin, on OTUG, where I am arguing that OT can
> be a general problem solving technique while he is arguing that it is only
> applicable to software. I think your example is a good one for demonstrating
> that the S-M notation, at least, is not suitable for hardware design (though
> it could be used to develop a gate level hardware simulator).
> I still think it is a more general problem solving technique, but thanks to
> your example I will have to back off hardware design as a viable arena. B-)

I wouldn't give up just yet. OOA is not for hardware design; but then, it's not
for software *design* either. That's what RD is for. (In hardware design, once RD
reaches the RTL level, existing tools can finish the job.)

Most of SM-OOA is applicable to hardware problems. Data and relationships (i.e.
the OIM) are the same for a hardware problem. State models are similarly
unchanged. Logical messages are exchanged between components; and events are an
appropriate abstraction. Every example I have found, so far, which causes problems
for a hardware implementation has been apparent during analysis of the problem.
The concept of setting things in progress, and then doing something else when
they're done, is an obvious concept in a problem domain. The concept of putting
things back to a known configuration is also a problem-domain concept.

I started using SM for modelling hardware because we wanted to produce high-speed
software models of microcontrollers. The first h/w model I produced using SM had a
very complete specification: a set of schematics for the working hardware. I spent
a lot of effort working out what implementation details should be abstracted away.
Because, at that time, we were doing software implementations, we were able to use
the standard kludges: event counters for synchronisation; complex state models to
allow reset. It felt wrong, but it worked. Now I'm revisiting the models to see if
SM can be used to specify hardware _before_ it is designed. The answer seems to be
yes, but we need to stop doing the kludges. Recursive Design is a very good fit
onto the front end of the hardware design process.

Chris Lynch wrote:

> Thanks to both of you for a compelling and enlightening answer to the old
> ESMUG question, "under what circumstances is SM OOA inappropriate?"
> Clearly, this occurs when there are other tools (both software and
> conceptual) specifically designed to solve the problem at hand.
> Paraphrasing Michael Jackson, a method which knows what you are trying to do
> will be much more helpful than one which doesn't.

The meta-model is the thing that defines what the method "knows". The OOA meta
model knows plenty about data modelling, but not much about behaviour. Its data
model is normalised; its behaviour model is minimal. Concepts like hardware clocks
do not belong in the OOA. They are architectural details; and we already have
pretty good tools for manipulating RTL architectures. The tools for hardware
design (both actual and conceptual) end at the RTL level (datapaths and pipelines
are notational abstractions over RTL; but there's no significant additional
behavioural abstraction). There are a number of attempts to go beyond this, but
there is no consensus. If we ignore specialised domains such as processors and
memory, then there are no good techniques for handling the problems of
million-gate IC designs. Something like OOA might work, but it needs to understand
behaviour a lot better than it currently does.

Lahman, quoting me again:

> > "Generate events A to start threads; generate event B when consequent
> > threads are done" is not the same as "Generate events A; count number of
> > generates. Wait until this number of reply events are received. Then
> > generate event B." They might seem superficially similar; but the latter
> > has a significant implementation bias.
> > It can also lead to quite complex OOA models (because of that
> > implementation bias).
>
> In my view your first description is just as specific as the second. How is
> generating threads and waiting for them all to finish any less specific than
> generating confirmation events and waiting for them all to be received?
>
> As far as the complexity in the OOA models, I see no difference. Instead of
> generating a confirmation event in the target instances you have to add
> syntax to terminate the thread. Instead of counting the returned events in
> the originator, it has syntax to start each thread. Plus you have to define a
> bridge to monitor the threads and generate the final transition event when
> they are all done.
>
> BTW, don't you have a problem with the thread approach of knowing when the
> thread is terminated? It seems to me this implies that the state where the
> thread terminates has to have context information if there are ways to
> transition to it that aren't in the thread.

Let's take a simple example first. Let's assume that a thread is completely
contained within a domain. Let's also assume that an action is executed for
exactly one thread. An action can do one of 3 things (from this viewpoint): it can
generate zero, one or many events. If it generates zero events, then the thread
ends (a special case is the STT with an "ignore" entry). If it generates 1 event,
then the thread continues in the next action (and thus is carried by the event).
If the action generates many events, then the thread branches, and then continues
in many next actions. (To keep definitions simple, I'll refer to the set of
branches as all being the same thread.) The thread is propagated by all outgoing
events.

Eventually, each branch will reach a state that generates no more events. When all
branches have reached such a state, the thread ends. To put it another way, when
there are no more events related to the thread, the thread-done delayed-event is
dispatched.

If there are several threads in the system, then the situation is unchanged. It is
simple to show that 2 threads can never merge: each action is entered by exactly
one event. The only case that must concern us is that an action may create a new
thread (using the synchronising delayed-event generator I described in a previous
post). In this case, it is obvious that the previous thread cannot complete until
the new thread is complete. Thus the new thread is completely nested. This allows
me to maintain my initial assumption that each action is executed for exactly one
thread.

The remaining problem is: what happens when a thread enters a wormhole? If the
wormhole has a synchronous response, then it is part of the action, and no
complications arise. For an asynchronous response the thread must be reattached to
the solicited event. This requires a bit of tinkering, but it isn't too difficult.
If the "away" domain is modelled in SM, then there is no problem, because the
thread can continue in that domain.

So, hopefully you can see that there is no need for an explicit "thread done"
process. A model that says "generate event X after synch events Y" is a lot less
complex than one where you need explicit synchronisation events. And it makes it
very easy to map to an architecture that processes all events Y on the next clock
cycle and delivers event X at the same time.
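The bookkeeping the architecture needs is just reference counting, as in this
hypothetical Python sketch ("thread" here is the analysis thread described above,
not an OS thread, and all names are mine): every event carries its thread, each
action propagates the thread into the events it generates, and when no events
remain for a thread, the delayed event is dispatched.

    from collections import deque

    queue = deque()        # pending events: (thread_id, action)
    live = {}              # thread_id -> outstanding event count
    done_events = {}       # thread_id -> delayed event to dispatch at the end

    def generate(thread_id, action):
        live[thread_id] = live.get(thread_id, 0) + 1
        queue.append((thread_id, action))

    def dispatch():
        while queue:
            thread_id, action = queue.popleft()
            action(thread_id)              # may generate further events
            live[thread_id] -= 1
            if live[thread_id] == 0:       # no branches left: thread ends
                print(done_events.pop(thread_id, "(no delayed event)"))

    def leaf(tid):                         # generates nothing: branch ends
        pass

    def fork(tid):                         # the thread branches...
        generate(tid, leaf)
        generate(tid, leaf)                # ...into two next actions

    done_events["t1"] = "generate event B"  # the synchronised delayed event
    generate("t1", fork)
    dispatch()                              # prints "generate event B"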
> But why do it in the meta model? The events and attribute values must be
> identified in any case. The bridge is no more difficult to write for a
> particular application than parameterizing an existing meta model construct.
> But developing a bullet-proof construct in the meta model that will handle all
> possible convoluted reset protocols strikes me as a daunting task. [Again,
> your last point below made this more clear.]

The reason for doing it in the meta model is quite simple: I can't simulate the
behaviour unless it's in the meta model. Let's say I add a "reset value" to each
attribute, and a "reset state" to each state model. When I activate the "reset"
wormhole, I expect all the attributes and state models to respond. This is an
architectural responsibility. Doing it with wormholes is incredibly messy (I've
done it a few times). Doing it in the meta model (and thus the simulator and
architecture) is easy, because the meta model has a single object named
"attribute" (or equivalent), and another for the state model.

Yes, handling all the different reset protocols is a bit more messy. But if you
don't try to generalise too far, then it's a tractable problem. It's just a matter
of extending the meta-model on a per-project basis, rather than having one
universal model. Current CASE tools can't handle that; but that's a tool problem,
not a conceptual one. Most of the tweaking is just that: I don't need a completely
new model each time. I can envisage tools that allow me to extend models via
inheritance and delegation. (SES had/have a tool named Genesis that allows you to
extend architectures using these techniques.) It should be possible to define
meta-model extensions in a way that doesn't break OOA simulators, and which allows
the extension to be fully utilised by the translator.
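A sketch of what the extension buys (hypothetical Python, with classes standing in
for the meta-model objects named above): reset becomes one loop in the
architecture over "attribute" and "state model" instances, driven entirely by
static data, instead of transitions and write accessors scattered through every
state model in the domain.

    class Attribute:
        def __init__(self, name, reset_value):
            self.name, self.reset_value = name, reset_value
            self.value = reset_value      # reset config doubles as the
                                          # initial population

    class StateModel:
        def __init__(self, name, reset_state):
            self.name, self.reset_state = name, reset_state
            self.current = reset_state

    class MetaModel:
        def __init__(self, attributes, state_models):
            self.attributes = attributes
            self.state_models = state_models
        def reset(self):
            # the "reset" wormhole: an architectural responsibility
            for a in self.attributes:
                a.value = a.reset_value
            for s in self.state_models:
                s.current = s.reset_state

    m = MetaModel([Attribute("count", 0)], [StateModel("Pin", "Idle")])
    m.attributes[0].value = 42            # the model runs for a while...
    m.state_models[0].current = "Driving"
    m.reset()
    print(m.attributes[0].value, m.state_models[0].current)   # 0 Idle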
> My issue is that if you use a meta model construct to handle _some_ of the
> behavior and relationships -- as is the case for most analysis patterns -- I
> have to jump back and forth between the meta model and the OOA to understand
> what is happening at a given level of abstraction.

I won't pretend it's always easy. If you find yourself jumping back and forward,
then there's some pollution somewhere. A domain should completely describe a
subject matter using concepts that make sense within that subject matter. When you
extend the meta model, the manner of extension must be natural. If you start
creating objects named "tag" or "property", then you are not properly analysing.
(Use tags/properties/colorations to add architectural information that is not
needed for simulation.)

> Fascinating. So what you are really after are extensions of the notation that
> allow you to do neat things that you can't do now. I don't think that is what
> analysis patterns are about. I believe they are supposed to be used to
> describe idioms in the prevailing notation, regardless of whether that
> notation is constraining.

Whilst I agree with your comment for "design" patterns, I'm not sure it can be
applied to analysis patterns. Indeed, the technical use of the word "pattern",
when applied to analysis, may be an oxymoron: patterns are a design (or
architecture) concept. In analysis, we just find recurring themes. If you can
apply the patterns philosophy to an analysis notation, then that suggests that the
notation is actually a design notation.

In analysis, I want to explore the requirements and present my understanding as an
unambiguous model. I want the meta model which underlies that model to provide a
set of concepts that allow me to present the model in a language that a
problem-domain expert will understand. If I have to use idioms to work round
limitations of the notation, then these will obscure the information that the
model is attempting to convey.

Fine granularity of detail (i.e. atomic data) is not a problem. But we also need
atomic behaviours. Imagine a data model where you couldn't use real numbers, and
you had to use 3 attributes to explicitly model a floating point number. We use
the concept of attribute domains to avoid such absurdities. How do we avoid
similar absurdities for similarly atomic behaviours? If the meta model includes a
means to attach bridges to types, then we can add powerful attribute domains
without messing with the meta model. These bridges define the set of transforms
that can be applied to data of the class (as Date said, a class is an attribute
domain). There doesn't appear to be a similar mechanism for importing primitive
behavioural patterns; so we have to adjust the meta model as we discover
behavioural primitives. And, of course, some things are best expressed using the
existing wormhole/bridge mechanism.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.


lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Lynch...

> Thanks to both of you for a compelling and enlightening answer to the old
> ESMUG question, "under what circumstances is SM OOA inappropriate?"
> Clearly, this occurs when there are other tools (both software and
> conceptual) specifically designed to solve the problem at hand.

Alas, after a little cogitation in the shower this morning, I am not completely
convinced myself about the specific hardware implementation case. B-)

I can think of abstractions that superficially solve the problem. Much of the
parallelism issue stems from the idea of sending separate signals simultaneously
to several different pins. However, if one raises the level of abstraction so that
I/O Ports talk to I/O Ports, only one event is needed to characterize a Signal
passing between them.

One can also think of an event representing the clock edge that is sent to all
instances periodically. The instances could then change state or not as
appropriate. These events could be sent by the architecture. If one assumes the
simultaneous view of time, then they could be processed simultaneously by each
instance. This would address the clock synchronization of signals in different
parts of the circuit.

This seems to hang together at the megathinker level, but I suspect it is fraught
with problems at the detail level. For example, how does one distinguish between
the Signal events generated by OOA actions and the architectural clock edge events
to keep the sequencing straight? Or how does one transition from the I/O Port
interface abstraction to the gate level processing of an ASIC? I only mention it
because it seems plausible enough to suggest that there _may_ be a way to
implement hardware from an OOA if the set of abstractions is correct.

--
H. S. Lahman                     There is nothing wrong with me that
Teradyne/ATD                     could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to lahman...

> > Yes, you're correct.
> > Control returns (normally to the main program but, if Warping, to the
> > action) once the internal event queue is empty.
> >
> > The architecture I'm working with is not required to deal with
> > asynchronously delivered events, but if it had to I would probably
> > create an external event queue.
>
> You would probably only want to push incoming events onto that queue while
> the thread is active and load them back on the regular queue when the thread
> completes.

In practice, the external event queue would simply be an OS-provided mailbox. It's
up to the bridge to the implementation domain to convert the message to an event
and start up the architecture to process it and any resulting internal events.

> There are situations where you can count on the relative sequence of external
> events (but not their timing) in the OOA. In those situations there might
> already be an external event on your queue that needs to be processed.

The order in which external events are processed is not changed by warping.

> However, even without external events there is still a potential termination
> problem. The target objects could place events on the queue that lead to
> separate, arbitrarily complex processing. Since the event that your
> originating instance places on the queue after the return to cause its
> transition is self-directed, it takes precedence, so this would not normally
> be a problem. But if you are waiting for the queue to empty to do the return
> you could be in trouble.

I don't understand the last bit. By the time the originating instance places its
event on the internal queue the warp is already over.

Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145


smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Dave Whipp...

> > I've just remembered a very powerful, but probably *proscribed*,
> > technique (that I still use) to solve this problem. It's called
> > Warping (OK, I just gave it a name!).

[...]

> The danger with this technique is the possibility of a deadlock.

If that's the only one then it's much better than I expected. :-)

> The time rules of OOA say that if an event is delivered to an instance
> while it is processing a state action, then it is queued. The time scope
> of a "warp" is within an action; so, if any event is delivered to the
> warping instance (for any reason), then the event will not be delivered
> until the end of the warp. Unfortunately, the warp will not end until
> the event has been delivered.

I should hope the warping instance would not receive such an event. In my
architecture it would have to be an internal event anyhow. Things could get a bit
chaotic if it did happen. I suspect the technique would only work with a few types
of architecture.

> If you define the "next-event" as part of the process that enters the
> warp (I explained my delayed-event modification a few posts ago) then
> the current action completes before you enter the warp; but the event
> is not sent until the end of the warp.
>
> Of course, my method goes a bit beyond your warp: it keeps track of threads.
> Thus you can have multiple, concurrent, delayed events in the system. Your
> warp is a simpler, though less general, approach. But I'd still advise you
> to implement it in a way that moves the warp out of the state action.
Mike

--
Mike Finn
Dark Matter  | Email: smf@cix.co.uk
Systems Ltd  | Voice: +44 (0) 1483 755145


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> I can think of abstractions that superficially solve the problem. Much of the
> parallelism issue stems from the idea of sending separate signals
> simultaneously to several different pins. However, if one raises the level of
> abstraction so that I/O Ports talk to I/O Ports, only one event is needed to
> characterize a Signal passing between them.

It is true that I/O ports talk to I/O ports. Indeed, a serial port doesn't need to
send a bitstream if it's talking to another serial port. It can send a character
stream, parameterised by throughput, latency and error model. The problem is, when
you're doing an ASIC implementation of the serial port, that type of abstraction
is inappropriate.

> One can also think of an event representing the clock edge that is sent to
> all instances periodically. The instances could then change state or not as
> appropriate. These events could be sent by the architecture. If one assumes
> the simultaneous view of time, then they could be processed simultaneously by
> each instance. This would address the clock synchronization of signals in
> different parts of the circuit.

It is wrong to put a clock in the OOA model (IMHO). The clock is an abstraction in
the architecture, which is used to regulate behaviour. If you start putting a
clock in the OOA then (a) you bias the model against a software implementation;
and (b) you overly constrain the hardware implementation. I only need to know
causal relationships between actions. The problem with reply-events in
synchronisation is that causality may be reversed. In a hardware implementation,
it may be the receiver that generates the event from the sender! Of course, this
isn't true for all hardware. Sometimes there is a handshaking protocol. These
issues should be excluded from the OOA so they can be addressed in the
architecture(s).

> This seems to hang together at the megathinker level, but I suspect it is
> fraught with problems at the detail level. For example, how does one
> distinguish between the Signal events generated by OOA actions and the
> architectural clock edge events to keep the sequencing straight? Or how does
> one transition from the I/O Port interface abstraction to the gate level
> processing of an ASIC?

The details are a problem. I don't think you've identified the real issues; but
there are definitely some problems. Linking abstractions is, I think, the key.
Wormholes (as currently defined) are not up to the job. They link within the OOA
abstraction, whereas we need techniques that flow to different abstractions. And
the links have got to be very general. When you migrate an OOA model onto a
cycle-based architecture, you need to tighten the test suite at the same time. As
you noted a few posts ago, I do have some extreme ideas about architectures,
translators and RD in general.

The one good point is that you don't need to go to gate level. Synthesis will do
that. You only need to produce high-quality RTL.

> I only mention it because it seems plausible enough to suggest that there
> _may_ be a way to implement hardware from an OOA if the set of abstractions
> is correct.

I'm sure there is. But the abstraction called OOA is one of those that has to be
correct.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
Chris Raistrick writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hi There,

Those of you following the Design Patterns thread may find the following patterns
useful. They are available for free download at http://www.kc.com.

  The Compatibility Pattern
  The Characteristic Pattern
  Role Migration Pattern
  Unordered Operations Pattern
  Prioritised and/or Ordered Operations Pattern
  Complex Resource Assignment Pattern
  Instance Creation Pattern
  Instance Deletion Pattern
  Peer-to-Peer Counterpart Pattern
  Counterpart Bridge with Enumerated Mappings Pattern

The document also includes a couple of introductory slides explaining the
extensions we have made to the OOA96 method version to form OOA97.

Please note the copyright notice on each page of the presentation. This material
is the intellectual property of Kennedy Carter Limited. It must not be included in
any training material or other published work without the express permission of
Kennedy Carter Limited.

Best Regards,

Chris Raistrick.

***********************************************************************
Chris Raistrick                    tel   : +44 1483 483200
Kennedy Carter Ltd                 fax   : +44 1483 483201
14 The Pines                       web   : http://www.kc.com
Broad Street                       email : chris@kc.com
Guildford GU3 3BH UK

"We may not be Rational but we are Intelligent"
************************************************************************


lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Finn...

> > There are situations where you can count on the relative sequence of
> > external events (but not their timing) in the OOA. In those situations
> > there might already be an external event on your queue that needs to be
> > processed.
>
> The order in which external events are processed is not changed by
> warping.

True, but the order of external and internal events _is_ changed. It is fair for
an OOA to rely upon a problem space constraint that orders the external events.
Since self-directed events take precedence, the OOA might use them to set a state
machine to the proper state to accept the second event. I speculate that with some
thought one could come up with a convoluted example to demonstrate a sequencing
problem.

As a simpler example, suppose your originator only requests the processing in
response to the first of two external events. Now suppose that the requested
processing involves a wait state for the second external event. The most likely
outcome is that you will exit warp prematurely (i.e., before the desired
processing is complete) when the wait state is entered. But it is possible, if
there is unrelated processing (see below), to get deadlocks or incorrect
processing.

> > However, even without external events there is still a potential
> > termination problem. The target objects could place events on the queue
> > that lead to separate, arbitrarily complex processing. Since the event that
> > your originating instance places on the queue after the return to cause its
> > transition is self-directed, it takes precedence, so this would not
> > normally be a problem. But if you are waiting for the queue to empty to do
> > the return you could be in trouble.
>
> I don't understand the last bit. By the time the originating
> instance places its event on the internal queue the warp is already
> over.

Suppose some action in the initiated processing generates two events.
One is the event that announces it has completed the processing the originator
wanted. The other event starts a whole new chain of unrelated (to the originator)
processing. That unrelated processing will continue to throw events on the queue,
preventing the exit from warp even after all the processing that the originator
was really interested in was done. Eventually you will either get a deadlock or an
event to an invalid state.

--
H. S. Lahman                     There is nothing wrong with me that
Teradyne/ATD                     could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Chris Raistrick wrote:

> Those of you following the Design Patterns thread may find the following
> patterns useful. They are available for free download at http://www.kc.com.

I've had a look at the patterns. I have 2 comments. Firstly, they aren't really
patterns. They are examples. A pattern includes an example, but must also contain
descriptions of when to use it, when not to, consequences, related patterns, etc.
However, we all know how hype works with new buzzwords.

Most of the patterns were pretty good, solid examples of how to model various
situations. However, one struck me as being a bit unsafe. Perhaps you can clarify.

The problems are in your "unordered operations" example. "Pending Operation", in
state 2, deletes itself and then generates AT2 if there are no more operations.
Unfortunately, this can go wrong. Your model suggests that operations can progress
concurrently (there is no "else do next operation" in the process model). If
operations are done in parallel, then they can also complete in parallel.

case 1: multiple AT2 events. Two or more operations complete at the same time; all
delete themselves and see that no other operations exist. So all generate AT2.
This will have undesired consequences unless very carefully handled.

case 2: zero AT2 events. The same scenario as above; but there is a delay between
the "delete" clause and other instances seeing that it has been deleted. This may
be deliberate (e.g. as a solution to "data-set consistency", you might decide to
hide the effects of accessors until the action completes). In this case, the final
N objects complete simultaneously, but they all see that the others still exist.
So none of them generates AT2.
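Both failure cases fit in a few lines of Python (illustrative only; "pending"
stands for the set of Pending Operation instances, and the flag selects whether
self-deletes are visible to peers completing on the same step):

    def complete_all(pending, deferred_deletes):
        # N operations complete on the same step: all delete themselves,
        # then each tests "are there any operations left?"
        before = set(pending)
        pending.clear()                   # the simultaneous self-deletes
        at2_events = 0
        for op in before:
            visible = (before - {op}) if deferred_deletes else pending
            if not visible:               # sees no remaining operations...
                at2_events += 1           # ...so it generates AT2
        return at2_events

    print(complete_all({"op1", "op2"}, deferred_deletes=False))  # case 1: 2
    print(complete_all({"op1", "op2"}, deferred_deletes=True))   # case 2: 0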
Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.


lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Whipp...

> It is true that I/O ports talk to I/O ports. Indeed, a serial port doesn't
> need to send a bitstream if it's talking to another serial port. It can send
> a character stream, parameterised by throughput, latency and error model. The
> problem is, when you're doing an ASIC implementation of the serial port, that
> type of abstraction is inappropriate.

Sure, but the event abstraction between two I/O ports would be in a different
domain at a different level of abstraction than the implementation of a serial
port in an ASIC. In the high level domain the event carries a Signal value that is
implemented as a handle to a data structure. In the lower level domain that Signal
value might get expanded by the bridge to a set of characters.

I am not enthusiastic about this sort of hierarchical definition of domains
because it seems clumsy and it could get rather tedious crossing several orders of
magnitude of abstraction between System and Gate. OTOH, it has a certain
plausibility because of the similarity of approach to HDLs like VHDL. I agree that
the leap from the block diagram description somewhere in the ASIC to gates is
daunting.

> It is wrong to put a clock in the OOA model (IMHO). The clock is an
> abstraction in the architecture, which is used to regulate behaviour. If you
> start putting a clock in the OOA then (a) you bias the model against a
> software implementation; and (b) you overly constrain the hardware
> implementation. I only need to know causal relationships between actions. The
> problem with reply-events in synchronisation is that causality may be
> reversed. In a hardware implementation, it may be the receiver that generates
> the event from the sender!

I did not intend to model the clock within a domain where state transitions needed
to be synched. I was assuming external, simultaneous events representing the clock
edges. I would expect those to be postulated. If one also had to design the clock,
that would be in a separate domain.

> The details are a problem. I don't think you've identified the real issues;
> but there are definitely some problems. Linking abstractions is, I think, the
> key. Wormholes (as currently defined) are not up to the job. They link within
> the OOA abstraction, whereas we need techniques that flow to different
> abstractions. And the links have got to be very general. When you migrate
> an OOA model onto a cycle-based architecture, you need to tighten the
> test suite at the same time. As you noted a few posts ago, I do have some
> extreme ideas about architectures, translators and RD in general.

I might agree with this using the implied restriction in the wormhole paper to
syntactic shifts in the bridge. However, as I recall we both argued that semantic
shifts need to be accommodated as well. If the bridge can do a semantic shift and
the domains have the appropriate levels of abstraction, then it seems to me that
wormholes might be able to do the job.

> The one good point is that you don't need to go to gate level. Synthesis will
> do that. You only need to produce high-quality RTL.

I assume you mean Register Transfer Language here. This would certainly make life
easier. I also think this should be realized rather than OOA. But we may have
different views of "produce". In our context I am viewing the OOA as a description
of the hardware design. The RD would translate this into the appropriate drawings,
specifications, etc. for fabrication. I assume that when you produce RTL you want
the translator to create it from the OOA. I would regard the RTL as something a
transform does from a higher level abstraction and the translator would simply
package that output with the other documents.

--
H. S. Lahman                     There is nothing wrong with me that
Teradyne/ATD                     could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com


Ted Barbour writes to shlaer-mellor-users:
--------------------------------------------------------------------

Lynch, Chris D. SDX wrote:
SDX" writes to shlaer-mellor-users: > -------------------------------------------------------------------- > Thanks to both of you for a compelling and enlightening answer to the old > ESMUG question, "under what circumstances is SM OOA inappropriate?" > Clearly, this occurs when there are other tools (both software and > conceptual) specifically designed to solve the problem at hand. > > Paraphrasing Michael Jackson, a method which knows what you are trying to do > will be much more helpful than one which doesn't. For anyone attending the Spring '99 Embedded Systems Conference in Chicago, our colleague Carolyn Duby will be presenting a paper on this very subject. "There are other tools specifically designed to solve the problem at hand" is but one reason for not using OOA on a particular domain. Carolyn's paper also discusses several case studies where domain requirements could be most easily satisfied by legacy code or pre-packaged libraries. Additionally, it is not uncommon to create systems where not all contributing development teams are using OOA. Issues with partitioning and integrating such systems are also discussed in the paper. Ted -- ________________________________________________ Pathfinder Solutions Inc. www.pathfindersol.com 888-OOA-PATH effective solutions for software engineering challenges Ted Barbour voice: +01 508-875-0179 tedb@pathfindersol.com fax: +01 508-384-7906 ________________________________________________ Dave Whipp writes to shlaer-mellor-users: -------------------------------------------------------------------- Mike Finn wrote: > > The danger with this technique is the possibility of a deadlock. > > If that's the only one then it's much better than I expected. :-) If you're using the same event queue in the warp, then you won't be violating any of the timing rules of OOA -- they only specify instance-to-instance event ordering. You get a deadlock if you use the same instance; so avoiding the deadlock allows you keep to the OOA event ordering rules. If the warp causes a very long delay because of difficulties emptying the event queue, then the resulting performance may be undesirable in a response-time constrained system. But you aren't breaking the rules of OOA; just the requirements of the application. Dave. -- Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany mailto:David.Whipp@hl.Siemens.de Phone. +49 89 636 83743 Opinions are my own. Factual statements may be incorrect. Chris Raistrick writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Dave Whipp. Dave Whipp wrote > >Chris Raistrick wrote: > >> Those of you following the Design Patterns thread may find the following >> patterns useful. They are available for free download at http:/www.kc.com. > >I've had a look at the patterns. I have 2 comments. Firstly, they aren't >really patterns. They are examples. A Pattern includes an example, but must >also contain descriptions of when to use them, when not to, consequences, >related patterns, etc. However, we all know how hype works with new buzwords. > OK - it's a fair cop, gov'nor! >Most of the pattens were pretty good, solid examples of how to model various >situations. However, one struck me as being a bit unsafe. Perhaps you can >clarify. > >The problems are in your "unordered operations" example. "Pending Operation", >in state 2, deletes itself and then generates AT2 if thre are no more >operations. Unfortunately, this can go wrong. Hmmm. 
> >Your model suggests that operations can progress concurrently (there is no >"else do next operation" in the process model). If operations are done in >parallel, then they can also complete in parallel. I agree. > >case 1: multiple AT2 events. Two or more operations complete at the >same time; all delete themselves and see that no other operations exist. >So all generate AT2. This will have undesiried consequences unless >very carefully handled. Perhaps I have missed something, but I would deal with this by specifying an "Ignore" effect in the STT for event AT2:all_ops_complete in all states other than state 1 (Waiting). > >case 2: zero AT2 events. The same scenario as above; but there is a >delay between the "delete" clause and other instances seing that it has >been deleted. This my be deliberate (e.g. as a solution to "data-set >consistancy", you might decide to hide the effects of accesors until the >action completes). In this case, the final N objects complete >simulataneously, but they all see that the others still exist. So >none of them generates AT2. It is not the "delete this" process that matters here, it is the statement: "unlink completed_thing R1.Type_Of_Operation pending_op" This deletes the instance of R1 linking the "Actual Thing" instance to the previously pending "Type of Operation" instance through R1. This unlink statement executes synchronously, and is known to have completed before the action processing continues. Therefore, the subsequent test for an empty set of related instances appears to be safe. If two instances simultaneously execute the unlink, and therefore both notice that there are no more R1 instances for the "Actual Thing" in question, the result is your "case 1". > > Chris *********************************************************************** Chris Raistrick tel : +44 1483 483200 Kennedy Carter Ltd fax : +44 1483 483201 14 The Pines web : http://www.kc.com Broad Street email : chris@kc.com Guildford GU3 3BH UK "We may not be Rational but we are Intelligent" ************************************************************************ lahman writes to shlaer-mellor-users: -------------------------------------------------------------------- Responding to Whipp... > I wouldn't give up just yet. OOA is not for hardware design; but then, its not > for software *design* either. That's what RD is for (In hardware design, once > RD reaches the RTL level, then existing tools can finish the job). We disagree about this, but I didn't realize how much until much later. > Most of SM-OOA is applicable to hardware problems. Data and relationships (i,e, > the OIM) are the same for a hardware problem. State models are similarly > unchanged. Logical messages are exchanged between components; and events are > an appropriate abstractions. I think that state models are arguable. There is still the issue of synchronization. I can't see modeling hardware behavior without using the simultaneous view of time to allow event actions to be processed in parallel in the domain. However, the S-M notation does not provide a mechanism for indicating _which_ events are processed in parallel. [In fact, S-M doesn't allow events to be processed in parallel; it simply allows multiple actions to overlap execution -- the events are assumed to pop off the queue one at a time.] Even if you use external clock edge events for changing states in lockstep, there is the problem of distinguishing between the OOA generated events and the synching events. 
> Because, at that time, we were doing software implementations, we were able
> to use the standard kludges: event counters for synchronisation; complex
> state models to allow reset. It felt wrong, but it worked. Now I'm revisiting
> the models to see if SM can be used to specify hardware _before_ it is
> designed. The answer seems to be yes, but we need to stop doing the kludges.
> Recursive Design is a very good fit onto the front end of the hardware
> design process.

My argument is that if one is creating a hardware design, then correct
abstractions could eliminate the problem. For example, the idea of waiting for
processing to finish is probably _not_ very relevant to hardware. As you pointed
out elsewhere, the signal propagates on the clock edge (subject to delays). If you
postulate external events for the clock edges, this problem goes away and no
counter is needed.

> Eventually, each branch will reach a state that generates no more events.
> When all branches have reached such a state, the thread ends. To put it
> another way, when there are no more events related to the thread, the
> thread-done delayed-event is dispatched.

Not necessarily. Assuming a counter, suppose I reach a state that generates two
events: one to announce completion of the desired processing and another to start
a chain of unrelated processing. If you substitute a thread for the counter, this
action still produces one event for the unrelated processing. How do you know to
terminate the thread? I think you need specific syntax in the action to indicate
termination of the thread.

> The remaining problem is: what happens when a thread enters a wormhole? If
> the wormhole has a synchronous response, then it is part of the action, and
> no complications arise. For an asynchronous response the thread must be
> reattached to the solicited event. This requires a bit of tinkering, but it
> isn't too difficult. If the "away" domain is modelled in SM, then there is
> no problem, because the thread can continue in that domain.

There is another problem. What if there is a wait state in the thread where the
processing can't continue until some other, possibly external, event arrives? It
seems to me that your thread will terminate prematurely unless you apply some OOA
syntax to say "Don't Give Up Yet" in that state.

> The reason for doing it in the meta model is quite simple: I can't simulate
> the behaviour unless it's in the meta model. Let's say I add a "reset value"
> to each attribute, and a "reset state" to each state model. When I activate
> the "reset" wormhole, I expect all the attributes and state models to
> respond. This is an architectural responsibility. Doing it with wormholes
> is incredibly messy (I've done it a few times). Doing it in the meta model
> (and thus the simulator and architecture) is easy, because the meta model
> has a single object named "attribute" (or equivalent), and another for the
> state model.

This seems to be a matter of perspective. In our models only a modest fraction of
attributes and very few state models are affected by a reset. Therefore a reset is
not a generic model operation for us.
> > My issue is that if you use a meta model construct to handle _some_ of
> > the behavior and relationships -- as is the case for most analysis
> > patterns -- I have to jump back and forth between the meta model and the
> > OOA to understand what is happening at a given level of abstraction.
>
> I won't pretend it's always easy. If you find yourself jumping back and
> forward, then there's some pollution somewhere. A domain should completely
> describe a subject matter using concepts that make sense within that
> subject matter. When you extend the meta model, the manner of extension
> must be natural

This is where I see the problem. When you move an analysis pattern into the
meta model, you are moving part of the natural domain description into the
meta model. To follow that "complete description" you need two things: some
syntax in the OOA to indicate which pattern is being used and an unfailing
memory of what has been moved into the meta model for that pattern.

I can see your (M) proposal as an example of the sort of thing you seem to
be advocating. But I think that is quite special. Though it is a pattern,
it is already at the meta level because it does not deal with specific
problem space issues -- it deals with fundamental changes to the notation
and the way one builds an OOA that apply universally to all OOA models.
Analysis patterns are far more prosaic and localized; they simply describe
a narrow set of solution commonalities. Trying to move those commonalities
into the meta model for tens, hundreds, or perhaps thousands of such
patterns is not going to leave a very clear or complete OOA notation.

> Whilst I agree with your comment for "design" patterns, I'm not sure it
> can be applied to analysis patterns. Indeed, the technical use of the word
> "pattern", when applied to analysis, may be an oxymoron: patterns are a
> design (or architecture) concept. In analysis, we just find recurring
> themes. If you can apply the patterns philosophy to an analysis notation,
> then that suggests that the notation is actually a design notation.

You certainly aren't shy about reading the world into the D of RD! I see an
OOA as the design of a complete solution to a problem -- it just happens to
be an abstract one. RD is a specialized mechanism for converting an
abstract solution into a concrete one. [Note I used "mechanism" advisedly
-- I do not consider translation to be a design activity. (But I would
consider creating an architecture as a design activity.)]

Regardless of what one calls it, patterns are relevant to developing the
description of a solution. Therefore they are relevant to building an OOA
and to building an architecture. But I would think the patterns are
different for each, which is why I thought we distinguished between
"analysis patterns" and "design patterns" early on in this thread.

> In analysis, I want to explore the requirements; and present my
> understanding as an unambiguous model. I want the meta model which
> underlies that model to provide a set of concepts that allow me to present
> the model in a language that a problem-domain expert will understand. If I
> have to use idioms to work round limitations of the notation, then these
> will obscure the information that the model is attempting to convey.

I don't think patterns are about extending notations or getting around
notation limitations. They are meant to be expressed in the _existing_
notation. They are idioms of model expression.
In our case an analysis pattern is a description of an OOA solution idiom
that recurs because a particular problem recurs in the application or in
different applications.

> Fine granularity of detail (i.e. atomic data) is not a problem. But we
> also need atomic behaviours. Imagine a data model where you couldn't use
> real numbers; and you had to use 3 attributes to explicitly model a
> floating point number. We use the concept of attribute domains to avoid
> such absurdities. How do we avoid similar absurdities for similarly atomic
> behaviours?

Wow. I have absolutely no idea where you are going with this. Hopefully you
accidentally deleted a paragraph before this. Otherwise I may have to take
my karma into the shop.

--
H. S. Lahman                 There is nothing wrong with me that
Teradyne/ATD                 could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

Chris Raistrick wrote:
[responding to whipp]

> >case 1: multiple AT2 events. Two or more operations complete at the
> >same time; all delete themselves and see that no other operations exist.
> >So all generate AT2. This will have undesired consequences unless
> >very carefully handled.
>
> Perhaps I have missed something, but I would deal with this by specifying
> an "Ignore" effect in the STT for event AT2:all_ops_complete in all states
> other than state 1 (Waiting).

What if, on receiving the AT2 event, a consequent effect is that a new set
of operations is created? (Remember, we don't know when the unwanted AT2
event will be delivered. The sending instance has been deleted, so there
are absolutely no timing rules that affect this.) So the user creates a new
set of operations, and the STT moves back into state 1, where it is waiting
for the new set of operations to complete (i.e. it no longer ignores the
event). Then it receives an AT2 event (the unwanted one) before the new set
of operations is complete.

> >case 2: zero AT2 events. The same scenario as above; but there is a
> >delay between the "delete" clause and other instances seeing that it has
> >been deleted. This may be deliberate (e.g. as a solution to "data-set
> >consistency", you might decide to hide the effects of accessors until the
> >action completes). In this case, the final N objects complete
> >simultaneously, but they all see that the others still exist. So
> >none of them generates AT2.
>
> It is not the "delete this" process that matters here, it is the
> statement:
>
>    "unlink completed_thing R1.Type_Of_Operation pending_op"
>
> This deletes the instance of R1 linking the "Actual Thing" instance to the
> previously pending "Type of Operation" instance through R1. This unlink
> statement executes synchronously, and is known to have completed before
> the action processing continues. Therefore, the subsequent test for an
> empty set of related instances appears to be safe. If two instances
> simultaneously execute the unlink, and therefore both notice that there
> are no more R1 instances for the "Actual Thing" in question, the result is
> your "case 1".

I will agree that the action that does the unlink knows that the unlink has
completed; but do other instances? Is a relationship manipulation an atomic
operation, or can two instances manipulate the same relationship and remain
unaware of each other's actions?
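To make the two outcomes concrete, here is a minimal sketch in Python of
the interleavings at issue. The names ("op1", "AT2") are taken from the
discussion, but the scheduling model itself is an assumption made purely
for illustration.

    # Each completing instance first unlinks itself from R1, then tests
    # whether the set of related instances is empty; an empty set means
    # "generate AT2". The schedule lists the order of these steps.

    def run(schedule):
        pending = {"op1", "op2"}          # operations still linked via R1
        events = []
        for instance, step in schedule:
            if step == "unlink":
                pending.discard(instance)
            elif step == "check" and not pending:
                events.append(("AT2", instance))
        return events

    # Strictly sequential actions: exactly one AT2 is generated.
    print(run([("op1", "unlink"), ("op1", "check"),
               ("op2", "unlink"), ("op2", "check")]))
    # -> [('AT2', 'op2')]

    # Simultaneous completion: both unlink before either checks, so both
    # see an empty set and both generate AT2 -- Whipp's case 1.
    print(run([("op1", "unlink"), ("op2", "unlink"),
               ("op1", "check"), ("op2", "check")]))
    # -> [('AT2', 'op1'), ('AT2', 'op2')]

Whether an architecture permits the second schedule is exactly the
atomicity question being asked here.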
Perhaps you can point me to a reference which describes the exact rules for
determining the dataset seen by an instance for the duration of an action?
Preferably not vendor-specific ;-). However, even if my case 2 results in a
case-1 scenario, it's still a problem.

Dave.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.

Dave Whipp writes to shlaer-mellor-users:
--------------------------------------------------------------------

lahman wrote:

> > The one good point is that you don't need to go to gate level. Synthesis
> > will do that. You only need to produce high quality RTL.
>
> I assume you mean Register Transfer Language here. This would certainly
> make life easier.

RTL is register-transfer-l*** (l*** = level/language/layer: take your
pick). It is a well-defined architecture: combinatorial logic driving
clocked flip-flops. RTL code is generally written in either VHDL or Verilog
- using subsets of these languages. Synthesis tools don't really mind which
language you use; it's the structure that's important.

> I also think this should be realized rather than OOA. But we may have
> different views of "produce".

By "produce" I mean that the RTL description is produced as the output of
the RD. My fundamental architecture is RTL; the population of this
architecture is the hardware design. Actually, the RD may also need to
output the synthesis constraints files, assuming these can be derived from
high level interface specifications.

> In our context I am viewing the OOA as a description of the hardware
> design.

I would want the OOA to describe the behaviour of the device. The fact that
it's a hardware design *should be* irrelevant.

> The RD would translate this into the appropriate drawings, specifications,
> etc. for fabrication. I assume that when you produce RTL you want the
> translator to create it from the OOA. I would regard the RTL as something
> a transform does from a higher level abstraction and the translator would
> simply package that output with the other documents.

It sounds nice and simple, doesn't it. But what does the translator do?
While it's reasonably straightforward to say that every attribute is a
register and actions are combinatorial logic, such a view is far too
simplistic.

Firstly, you need to think about creation and deletion of objects. Do you
assign a register to a specific instance of an attribute, or do you
maintain a pool of registers to be assigned to a limited number of objects?
Then you need to think about events. Can you compress an event into
combinatorial logic, or does it need a register? What happens if two events
are generated to an instance simultaneously? How many event queues, if any?
How big does each event queue need to be? Is a transform so complex that it
needs multiple clock cycles? If so, do you use a multicycle path, or do you
add registers? If the latter, do you pipeline state actions? Or is the
pipeline orthogonal?

When you sit down and analyse the design decisions made by an ASIC
designer, you find that it's very complex (not really surprising). RD tells
us to construct an adequate model of the design process; I don't like to
prejudge the results; however, I think that OOA-to-RTL is a bit too far to
jump in one step.

--
Dave Whipp, Siemens AG: HL DC PE MC, MchM, Munich, Germany
mailto:David.Whipp@hl.Siemens.de  Phone. +49 89 636 83743
Opinions are my own. Factual statements may be incorrect.
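The "combinatorial logic driving clocked flip-flops" structure Whipp refers
to can be sketched in a few lines. Python is used here purely as an
executable illustration, and all names are invented for the sketch.

    # A toy RTL block: register contents change only on the clock edge,
    # and the next values are a pure (combinatorial) function of the
    # current values.

    class RtlBlock:
        def __init__(self, initial_regs, comb_logic):
            self.regs = dict(initial_regs)  # flip-flop contents
            self.comb_logic = comb_logic    # pure function: state -> state

        def clock_edge(self):
            # All flip-flops update together; the combinatorial function
            # may only read the values from before the edge.
            self.regs = self.comb_logic(dict(self.regs))

    # Example: a 4-bit counter is just "increment" as combinatorial logic.
    counter = RtlBlock({"count": 0},
                       lambda s: {"count": (s["count"] + 1) % 16})
    for _ in range(3):
        counter.clock_edge()
    print(counter.regs)   # {'count': 3}

Every translation question raised in the message above -- register pools,
event queues, multicycle paths -- is a question about how to populate this
one structure, which is why the jump from OOA to RTL is larger than it
first appears.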
lahman writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to Raistrick...

> Those of you following the Design Patterns thread may find the following
> patterns useful. They are available for free download at
> http://www.kc.com.

Alas, you did not mention where they were on the site. Since you mentioned
it was a presentation, I assumed it was from SMUG. But I didn't see any
titles there that indicated patterns. I also checked the download area and
didn't see anything likely.

--
H. S. Lahman                 There is nothing wrong with me that
Teradyne/ATD                 could not be cured by a capful of Drano
179 Lincoln St. L51
Boston, MA 02111-2473
(Tel) (617)-422-3842
(Fax) (617)-422-3100
lahman@atb.teradyne.com

peterf@pathfindersol.com (Peter J. Fontana) writes to shlaer-mellor-users:
--------------------------------------------------------------------

At 01:11 PM 12/18/98 -0500, shlaer-mellor-users@projtech.com wrote:
>lahman writes to shlaer-mellor-users:
>... Otherwise I may have to take my karma into the shop.

Finally - this thread takes a turn to a level of concreteness that I can
grab onto. ;-)

Hey - to all you ESMUGgers out there taking off for the Holidays - have a
happy and safe one!

 _______________________________________________________
| Pathfinder Solutions Inc.   www.pathfindersol.com     |
| 888-OOA-PATH                                          |
| effective solutions for software engineering challenges |
| Peter Fontana               voice: +01 508-384-1392   |
| peterf@pathfindersol.com    fax:   +01 508-384-7906   |
|_______________________________________________________|
"Nau, Peter" writes to shlaer-mellor-users:
--------------------------------------------------------------------

This looks promising: http://www.kc.com/download/OOA_Patterns.pdf

> -----Original Message-----
> From: lahman [SMTP:lahman@atb.teradyne.com]
> Sent: Friday, December 18, 1998 11:38 AM
> To: shlaer-mellor-users@projtech.com
> Subject: Re: (SMU) Design Patterns
>
> lahman writes to shlaer-mellor-users:
> --------------------------------------------------------------------
>
> Responding to Raistrick...
>
> > Those of you following the Design Patterns thread may find the following
> > patterns useful. They are available for free download at
> > http://www.kc.com.
>
> Alas, you did not mention where they were on the site. Since you mentioned
> it was a presentation, I assumed it was from SMUG. But I didn't see any
> titles there that indicated patterns. I also checked the download area and
> didn't see anything likely.
>
> --
> H. S. Lahman
> Teradyne/ATD
> 179 Lincoln St. L51
> Boston, MA 02111-2473
> lahman@atb.teradyne.com

Customer Support writes to shlaer-mellor-users:
--------------------------------------------------------------------

MONTHLY ADMINISTRATIVE POSTING FOR SHLAER-MELLOR-USERS MAILING LIST

Description
===========
The Shlaer-Mellor-Users mailing list is for discussions about using the
Shlaer-Mellor Method of software development. Examples of relevant topics
include questions about how to deal with difficult modeling issues, how to
think about domains, and how to deal with software architecture design
problems. This mailing list is unmoderated and is sponsored by Project
Technology. For more information on Project Technology or the Shlaer-Mellor
Method, visit its Web site at http://www.projtech.com.

Posting
=======
Post messages to the mailing list by sending them to
shlaer-mellor-users@projtech.com. Only subscribers may post to this forum.

Replying
========
Replies to messages from the mailing list will be distributed to mailing
list subscribers. If you wish to respond to the poster by e-mail, be sure
to change the To: address of your message. Messages distributed by the
mailing list begin with an attribution line

   user@domain writes to shlaer-mellor-users:
   ------------------------------------------

to make it more apparent that the message came from the mailing list, and
to provide the sender's e-mail address in case he/she left it out of the
body of the message.

Policies
========
You can subscribe a local redistribution list or a gateway to a local
newsgroup, as long as whatever you do is local to your site. This
restriction makes it much easier for us to track down mailer problems.

We are very aggressive when it comes to bounced e-mail. If e-mail to you
starts bouncing, we'll probably drop you from the list fairly quickly;
you'll have to resubscribe when you get the problem fixed, and retrieve the
archives to find out what you missed.

Industry consultants and tool vendors are welcome and encouraged to answer
questions. Product mentions by tool vendors and blatant advertisements by
consultants are prohibited. Use of this forum for the collection of email
addresses for any purpose is strictly prohibited.
Subscription
============
To subscribe send a message to majordomo@projtech.com with

   subscribe shlaer-mellor-users

in the _body_ of the message. The e-mail address is optional; if not
provided, you will be subscribed under the address from which you sent the
message.

Unsubscription
==============
To unsubscribe send a message to majordomo@projtech.com with

   unsubscribe shlaer-mellor-users

in the _body_ of the message. You must unsubscribe using the exact same
address that you subscribed with, either the default from which you sent
the message, or an explicit address in the optional address field.

Archives
========
All messages to the list are archived. The archives are available via
majordomo using the "get" command (send "help" in the body of a message to
"majordomo@projtech.com" for more info). The archives are broken down by
year and month, and are stored in files named "archive.YYMM".

Digests
=======
There is a companion mailing list that distributes digests (approximately
weekly) of the postings to this mailing list. To subscribe or unsubscribe
to the digest mailing list, substitute "shlaer-mellor-users-digest" for
"shlaer-mellor-users" in the above instructions. You do not need to
subscribe to both; they distribute the same information. Digests are also
archived and are stored in files named "v00.n000", using the volume and
number of the digest.

Contacts
========
Questions or suggestions about mailing list content, usage, etc. should be
sent to

   Project Technology Sales & Marketing
   sales@projtech.com

Technical problems with subscriptions, unsubscriptions, bounced messages,
etc. should be sent to

   support@projtech.com

Customer Support

smf@cix.compulink.co.uk (Mike Finn) writes to shlaer-mellor-users:
--------------------------------------------------------------------

Responding to lahman...

> > The order in which external events are processed is not changed by
> > warping.
>
> True, but the order of external and internal events _is_ changed.

No, this is never true in my architecture. But the order in which internal
events are generated is changed, which is really the point.

> It is fair for an OOA to rely upon a problem space constraint that orders
> the external events. Since self-directed events take precedence, the OOA
> might use them to set a state machine to the proper state to accept the
> second event.

Self-directed events are always internal events, so they're always
processed before the next external event is even looked at.

> I speculate that with some thought one could come up with a convoluted
> example to demonstrate a sequencing problem. [...]
> Suppose some action in the initiated processing generates two events.

I'll assume you mean these events are generated during the warp.

> One is the event that announces it has completed the processing the
> originator wanted.

The originator does not require (and must not get sent) such an event. BTW,
states concerned with maintaining the counter in the originator are also
unnecessary.

> The other event starts a whole new chain of unrelated (to the originator)
> processing.

The new chain must be related since it was started during the warp.

> That unrelated processing will continue to throw events on the queue
> preventing the exit from warp even after all the processing that the
> originator was really interested in was done.

No matter how many internal events are created during the warp, the
internal event queue will always empty eventually. This is the only
condition required for ending the warp.
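A minimal sketch of the dispatch rule Finn describes -- internal
(self-directed) events are always drained before the next external event is
looked at, so a warp ends exactly when the internal queue empties. Python
is used purely for illustration and all names are assumptions made for the
sketch.

    from collections import deque

    def dispatch(external_events, handle):
        external = deque(external_events)
        internal = deque()
        while external or internal:
            if internal:
                event = internal.popleft()  # still inside the warp
            else:
                event = external.popleft()  # internal queue empty: warp over
            # handle() returns any internal events the action generates
            internal.extend(handle(event))

However many internal events handle() produces, the inner queue empties
before the next external event is examined, which is the claimed ordering
guarantee.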
> Eventually you will either get a deadlock or an event to an invalid state.

The key thing here is that the originating instance must never be made to
process another event while it is still in warp and processing the first
event.

Mike

PS. I'm slightly uncomfortable defending this technique, which requires the
OOA model to be altered (simplified - by removing the counter attribute and
the synchronizing states and events) and which I know cannot work for all
architectures. I'm still of the opinion that the best approach to this type
of problem is to determine *why* the originating instance cannot move to
the next state until the generated threads are done, though this may be
difficult...

--
Mike Finn
Dark Matter | Email: smf@cix.co.uk
Systems Ltd | Voice: +44 (0) 1483 755145

Steve Mellor writes to shlaer-mellor-users:
--------------------------------------------------------------------

Hello Everyone,

It is my sad duty to inform you all that Sally died at our home on November
12th of an apparent heart attack.

Our friend and colleague Meilir Page-Jones has written a moving (IMO,
extraordinary) obituary that is available on our web site at:

   http://www.projtech.com/sally.html

We are working now to place it in the various trade magazines.

Sally always read all the e-SMUG messages, and she often commented on the
high quality of the commentary that she saw, particularly when she compared
it (unfavorably) to the quality of some other user groups. It always
pleased her so when people "got it" and understood what we are trying to do
with executable modelling and translative mappings.

As you are all aware, our work is not yet complete, but I intend to carry
on and to make our results available as soon as I can. It is a sad irony
that I will have more time to work on her greatest project, Recursive
Design, now that I don't need to look after her.

-- steve mellor

PS I must apologize for the delay. It's been a long six weeks.

Duane Perry writes to shlaer-mellor-users:
--------------------------------------------------------------------

My family will say a prayer for Sally tonight. Sorry for your loss.

The Perry family.