An input processing card detects the incoming signal type. An output processing card needs to know that type in order to produce the correct output. Between the banks of these two card types sits a signal router. Functionally, the user wants to route inputs to outputs and expects each output to correspond to its input's signal type automatically. For non-functional reasons, the system consists of three different kinds of cards, each with its own processing and communication capability.
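To make the functional view concrete, here is a minimal sketch of that scenario as a single-domain model, ignoring the fact that the three card types run on separate processors. All names and types here are hypothetical illustrations, not part of any actual system:

```python
from enum import Enum, auto

class SignalType(Enum):
    ANALOG = auto()
    DIGITAL = auto()

class InputCard:
    """Detects the type of its incoming signal."""
    def __init__(self, detected_type: SignalType):
        # In the real hardware this would be sensed from the line.
        self.detected_type = detected_type

class OutputCard:
    """Produces an output matching the incoming signal's type."""
    def __init__(self):
        self.output_type = None

    def configure(self, signal_type: SignalType):
        self.output_type = signal_type

class Router:
    """Connects an input card to an output card and propagates the
    signal type, so the output corresponds to the input automatically."""
    def connect(self, src: InputCard, dst: OutputCard):
        dst.configure(src.detected_type)

router = Router()
src = InputCard(SignalType.DIGITAL)
dst = OutputCard()
router.connect(src, dst)
print(dst.output_type)  # prints SignalType.DIGITAL
```

In this form the fact that the cards are physically distributed never appears; distribution is exactly the kind of non-functional concern the rest of this post asks whether we can keep out of the model.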
There are many cases like this, where different processing units supply information to one another to get an overall job done.
Lahman has opined that if you have objects in the same domain existing on separate processors, you've "screwed up" the domain partitioning. He emphasizes not forgetting the 'D' in OOAD: non-functional requirements, such as a distributed processing environment, can affect the models. Still, I think that is a matter of practicality; if we had model compilers with effective marks, we would take advantage of abstracting to this degree.
Of course, if we cannot figure out how to build such a compiler, we have to break the problem down and couple the structure of the functional model to the architecture.
I'm trying to understand to what degree the model can be architecture-independent. To me, the more independent the model is of non-functional requirements and constraints, the more reusable it will be. Is there something inherent in the method that says you cannot abstract above a distributed architecture?