In Analysis Patterns (Fowler 1996, pp. 24–27), the pattern emerges from a discussion of modeling accountability within organizations, and it is later applied to posting rules in accounting. Although the pattern appears in several chapters, it doesn’t have a chapter of its own because it is different from most patterns in the book. Rather than modeling a domain, as the other analysis patterns do, KNOWLEDGE LEVEL structures a model.
To see the problem concretely, consider models of “accountability.” Organizations are made up of people and smaller organizations, and define the roles they play and the relationships between them. The rules governing those roles and relationships vary greatly for different organizations. At one company, a “department” might be headed by a “Director” who reports to a “Vice President.” In another company, a “module” is headed by a “Manager” who reports to a “Senior Manager.” Then there are “matrix” organizations, in which each person reports to different managers for different purposes.
A typical application would make some assumptions. When those didn’t fit, users would start to use data-entry fields in a different way than they were intended. Any behavior the application had would misfire, as the semantics were changed by the users. Users would develop workarounds for the behavior, or would get the higher-level features of the application shut off. They would be forced to learn complicated mappings between what they did in their jobs and the way the software worked. They would never be served well.
When the system had to be changed or replaced, developers would discover (sooner or later) that the meanings of the features were not what they seemed. They might mean very different things in different user communities or in different situations. Changing anything without breaking these overlaid usages would be daunting. Data migration to a more tailored system would require understanding and coding for all those quirks.
Example: Employee Payroll and Pension, Part 1
The HR department of a medium-sized company has a simple program for calculating payroll and pension contributions.
Figure 16.15. The old model, overconstrained for new requirements
Figure 16.16. Some employees represented using the old model
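The figures are not reproduced here. As a stand-in, here is a minimal Java sketch of the old model as the narrative describes it; all names are hypothetical, not taken from the book’s diagrams. Payroll and retirement plan are hard-wired together, so the combination management is about to ask for cannot even be represented.

// Hypothetical sketch of the overconstrained old model. The pairing is
// baked in: salaried employees get the defined-benefit plan, hourly
// employees the defined-contribution plan. An office administrator,
// being hourly, is forced into the defined-contribution plan.
public class Employee {
    public enum Payroll { HOURLY, SALARIED }
    public enum RetirementPlan { DEFINED_BENEFIT, DEFINED_CONTRIBUTION }

    private final String name;
    private final Payroll payroll;
    private final RetirementPlan retirementPlan;

    public Employee(String name, Payroll payroll) {
        this.name = name;
        this.payroll = payroll;
        // The constraint lives in the constructor; no one can override it.
        this.retirementPlan = (payroll == Payroll.SALARIED)
                ? RetirementPlan.DEFINED_BENEFIT
                : RetirementPlan.DEFINED_CONTRIBUTION;
    }
}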
But now, the management has decided that the office administrators should go into the “defined benefit” retirement plan. The trouble is that office administrators are paid hourly, and this model does not allow mixing. The model will have to change.
The next model proposal is quite simple: just remove the constraints.
Figure 16.17. The proposed model, now underconstrained
Figure 16.18. Employees can be associated with the wrong plan.
This model allows each employee to be associated with either kind of retirement plan, so each office administrator can be switched. This model is rejected by management because it does not reflect company policy. Some administrators could be switched and others not. Or the janitor could be switched. Management wants a model that enforces the policy:
Office administrators are hourly employees with defined-benefit retirement plans.
This policy suggests that the “job title” field now represents an important domain concept. Developers could refactor to make that concept explicit as an “Employee Type.”
Figure 16.19. The Type object allows requirements to be met.
Figure 16.20. Each Employee Type is assigned a Retirement Plan.
The requirements can be stated in the UBIQUITOUS LANGUAGE as follows:
An Employee Type is assigned to either Retirement Plan or either payroll.
Employees are constrained by the Employee Type.
Access to edit the Employee Type object will be restricted to a “superuser,” who will make changes only when company policy changes. An ordinary user in the personnel department can change Employees or point them at a different Employee Type.
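In rough Java terms (hypothetical names, not code from the book), the Employee Type is an ordinary object that carries the policy, and the Employee defers to it:

enum RetirementPlan { DEFINED_BENEFIT, DEFINED_CONTRIBUTION }

// Edited only by a superuser, and only when company policy changes.
class EmployeeType {
    private final String name;          // e.g. “Office Administrator”
    private final RetirementPlan retirementPlan;

    EmployeeType(String name, RetirementPlan retirementPlan) {
        this.name = name;
        this.retirementPlan = retirementPlan;
    }
    RetirementPlan retirementPlan() { return retirementPlan; }
}

// Day-to-day object: constrained by its type. An ordinary user can
// create Employees or point them at a different Employee Type, but
// cannot change what a type implies.
class Employee {
    private final String name;
    private EmployeeType type;

    Employee(String name, EmployeeType type) {
        this.name = name;
        this.type = type;
    }
    RetirementPlan retirementPlan() { return type.retirementPlan(); }
    void reassignType(EmployeeType newType) { this.type = newType; }
}

With this split, moving all office administrators into the defined-benefit plan becomes a single superuser edit to one Employee Type object.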
This model satisfies the requirements. The developers sense an implicit concept or two, but it is just a nagging feeling at the moment. They don’t have any solid ideas to pursue, so they call it a day.
A static model can cause problems. But problems can be just as bad with a fully flexible system that allows any possible relationship to be represented. Such a system would be inconvenient to use and wouldn’t allow the organization’s own rules to be enforced.
Fully customizing software for each organization is not practical because, even if each organization could pay for custom software, the organizational structure will likely change frequently.
So such software must provide options to allow the user to configure it to reflect the current structure of the organization. The trouble is that adding such options to the model objects makes them unwieldy. The more flexibility you add, the more complex it all becomes.
In an application in which the roles and relationships between ENTITIES vary in different situations, complexity can explode. Neither fully general models nor highly customized ones serve the users’ needs. Objects end up with references to other types to cover a variety of cases, or with attributes that are used in different ways in different situations. Classes that have the same data and behavior may multiply just to accommodate different assembly rules.
Nestled into our model is another model that is about our model. A KNOWLEDGE LEVEL separates that self-defining aspect of the model and makes its constraints explicit.
KNOWLEDGE LEVEL is an application to the domain layer of the REFLECTION pattern, used in many software architectures and technical infrastructures and described well in Buschmann et al. 1996. REFLECTION accommodates changing needs by making the software “self-aware,” and making selected aspects of its structure and behavior accessible for adaptation and change. This is done by splitting the software into a “base level,” which carries the operational responsibility for the application, and a “meta level,” which represents knowledge of the structure and behavior of the software.
Significantly, the pattern is not called a knowledge “layer.” As much as it resembles layering, REFLECTION involves mutual dependencies running in both directions.
Java has some minimal built-in REFLECTION in the form of protocols for interrogating a class for its methods and so forth. Such mechanisms allow a program to ask questions about its own design. CORBA has somewhat more extensive but similar REFLECTION protocols. Some persistence technologies extend the richness of that self-description to support partially automated mapping between database tables and objects. There are other technical examples. This pattern can also be applied within the domain layer.
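For example, a Java program can ask a question about its own design through the standard java.lang.reflect protocols:

import java.lang.reflect.Method;

public class SelfInspection {
    public static void main(String[] args) {
        // Interrogate a class for its methods: what does String define?
        for (Method m : String.class.getDeclaredMethods()) {
            System.out.println(m);
        }
    }
}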
Comparing the terminology of KNOWLEDGE LEVEL and REFLECTION

KNOWLEDGE LEVEL        REFLECTION
knowledge level        meta level
operations level       base level
Just to be clear, the reflection tools of the programming language are not for use in implementing the KNOWLEDGE LEVEL of a domain model. Those meta-objects describe the structure and behavior of the language constructs themselves. Instead, the KNOWLEDGE LEVEL must be built of ordinary objects.
The KNOWLEDGE LEVEL provides two useful distinctions. First, it focuses on the application domain, in contrast to familiar uses of REFLECTION. Second, it does not strive for full generality. Just as a SPECIFICATION can be more useful than a general predicate, a very specialized set of constraints on a set of objects and their relationships can be more useful than a generalized framework. The KNOWLEDGE LEVEL is simpler and can communicate the specific intent of the designer.
Therefore:
Create a distinct set of objects that can be used to describe and constrain the structure and behavior of the basic model. Keep these concerns separate as two “levels,” one very concrete, the other reflecting rules and knowledge that a user or superuser is able to customize.
Like all powerful ideas, REFLECTION and KNOWLEDGE LEVELS can be intoxicating. This pattern should be used sparingly. It can unravel complexity by freeing operations objects from the need to be jacks-of-all-trades, but the indirection it introduces does add some of that obscurity back in. If the KNOWLEDGE LEVEL becomes complex, the system’s behavior becomes hard to understand for developers and users alike. The users (or superuser)
who configure it will end up needing the skills of a programmer—and a meta-level programmer at that. If they make mistakes, the application will behave incorrectly.
Also, the basic problems of data migration don’t completely disappear. When a structure in the KNOWLEDGE LEVEL is changed, existing operations-level objects have to be dealt with. It may be possible for old and new to coexist, but one way or another, careful analysis is needed.
All of these issues put a major burden on the designer of a KNOWLEDGE LEVEL. The design has to be robust enough to handle not only the scenarios presented in development, but also any scenario for which a user could configure the software in the future. Applied judiciously, to the points where customization is crucial and would otherwise distort the design, KNOWLEDGE LEVELS can solve problems that are very hard to handle any other way.
Example: Employee Payroll and Pension, Part 2: KNOWLEDGE LEVEL
Our team members are back, and, refreshed from a night’s sleep, one of them has started to close in on one of the awkward points. Why were certain objects being secured while others were freely edited? The cluster of restricted objects reminded him of the KNOWLEDGE LEVEL pattern, and he decided to try it as a way of viewing the model. He found that the existing model could already be viewed this way.
Figure 16.21. Recognizing the KNOWLEDGE LEVEL implicit in the existing model
The restricted edits were in the KNOWLEDGE LEVEL, while the day-to-day edits were in the operational level. A nice fit. All the objects above the line described types or longstanding policies. The Employee Type effectively imposed behavior on the Employee.
The developer was sharing his insight with his colleagues when one of the other developers had another insight. The clarity of seeing the model organized by KNOWLEDGE LEVEL had let her spot what had been bothering her the previous day. Two distinct concepts were being combined in the same object. She had heard it in the language used on the previous day but hadn’t put her finger on it:
An Employee Type is assigned to either Retirement Plan or either payroll.
But that was not really a statement in the UBIQUITOUS LANGUAGE. There was no “payroll” in the model. They had spoken in the language they wanted, rather than the one they had. The concept of payroll was implicit in the model, lumped together with Employee Type. It hadn’t been so obvious before the KNOWLEDGE LEVEL was separated out; now the elements of that key phrase all appeared together in the same level . . . except one.
Based on this insight, she refactored again to a model that does support that statement.
Figure 16.22. Payroll is now explicit, distinct from Employee Type.
Figure 16.23. Each Employee Type now has a Retirement Plan and a Payroll.
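Again, the diagrams are not reproduced here. Revising the earlier sketch (hypothetical names throughout), the refactored model makes Payroll an explicit object with behavior of its own:

// Payroll is now a concept in its own right, no longer lumped into
// Employee Type. Each kind carries its own calculation behavior.
interface Payroll {
    double pay(double rate, double hoursWorked);
}

class HourlyPayroll implements Payroll {
    public double pay(double hourlyRate, double hoursWorked) {
        return hourlyRate * hoursWorked;
    }
}

class SalariedPayroll implements Payroll {
    public double pay(double salaryPerPeriod, double hoursWorked) {
        return salaryPerPeriod;   // salaried pay does not vary with hours
    }
}

enum RetirementPlan { DEFINED_BENEFIT, DEFINED_CONTRIBUTION }

// Each Employee Type now has a Retirement Plan and a Payroll, so the
// requirement can be stated directly in the model.
class EmployeeType {
    private final String name;
    private final RetirementPlan retirementPlan;
    private final Payroll payroll;

    EmployeeType(String name, RetirementPlan retirementPlan, Payroll payroll) {
        this.name = name;
        this.retirementPlan = retirementPlan;
        this.payroll = payroll;
    }
}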
The need for user control of the rules for associating objects drove the team to a model that had an implicit KNOWLEDGE LEVEL.

KNOWLEDGE LEVEL was hinted at by the characteristic access restrictions and a “thing-thing type” relationship. Once it was in place, the clarity it afforded helped produce another insight that disentangled two important domain concepts by factoring out Payroll.
KNOWLEDGE LEVEL, like other large-scale structures, isn’t strictly necessary. The objects will still work without it, and the insight that separated Employee Type from Payroll could still have been found and used. There may come a time when this structure doesn’t seem to be pulling its weight and can be dropped. But for now, it seems to tell a useful story about the system and helps developers grapple with the model.
At first glance, KNOWLEDGE LEVEL looks like a special case of RESPONSIBILITY LAYERS, especially the “policy” layer, but it is not. For one thing, dependencies run in both directions between the levels, but with LAYERS, lower layers are independent of upper layers.
In fact, KNOWLEDGE LEVEL can coexist with most other large-scale structures, providing an additional dimension of organization.
Pluggable Component Framework
Opportunities arise in a very mature model that is deep and distilled. A PLUGGABLE COMPONENT FRAMEWORK usually only comes into play after a few applications have already been implemented in the same domain.
When a variety of applications have to interoperate, all based on the same abstractions but designed independently, translations between multiple BOUNDED CONTEXTS limit integration. A SHARED KERNEL is not feasible for teams that do not work closely together. Duplication and fragmentation raise costs of development and installation, and interoperability becomes very difficult.
Some successful projects break down their design into components, each with responsibility for certain categories of functions. Usually all the components plug into a central hub, which supports any protocols they need and knows how to talk to the interfaces they provide. Other patterns of connecting components are also possible. The design of these interfaces and the hub that connects them must be coordinated, while more independence is possible designing the interiors.
Several widely used technical frameworks support this pattern, but that is a secondary issue. A technical framework is needed only if it solves some essential technical problem such as distribution, or sharing a component among different applications. The basic pattern is a conceptual organization of responsibilities. It can easily be applied within a single Java program.
Therefore:
Distill an ABSTRACT CORE of interfaces and interactions and create a framework that allows diverse implementations of those interfaces to be freely substituted. Likewise, allow any application to use those components, so long as it operates strictly through the interfaces of the ABSTRACT CORE.
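In skeletal Java (all names hypothetical), the shape of the pattern within a single program might look like this: the hub depends only on the ABSTRACT CORE interface, and any independently built implementation can be plugged in.

import java.util.HashMap;
import java.util.Map;

// ABSTRACT CORE: the distilled interface shared by every component.
interface Component {
    String category();                 // e.g. “billing”, “transport”
    void handle(String request);
}

// The hub knows how to talk to the interfaces, never to implementations.
class Hub {
    private final Map<String, Component> components = new HashMap<>();

    void plugIn(Component c) {
        components.put(c.category(), c);
    }

    void dispatch(String category, String request) {
        Component c = components.get(category);
        if (c == null) {
            throw new IllegalArgumentException("no component for " + category);
        }
        c.handle(request);
    }
}

// Any diverse implementation can be substituted freely, so long as it
// operates strictly through the ABSTRACT CORE interface.
class LoggingBilling implements Component {
    public String category() { return "billing"; }
    public void handle(String request) { System.out.println("billing: " + request); }
}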
High-level abstractions are identified and shared across the breadth of the system; specialization occurs in MODULES. The central hub of the application is an ABSTRACT CORE within a SHARED KERNEL. But multiple BOUNDED CONTEXTS can lie behind the encapsulated component interfaces, so this structure can be especially convenient when many components are coming from many different sources, or when components are encapsulating preexisting software for integration.
This is not to say that components must have divergent models. Multiple components can be developed within a single CONTEXT if the teams CONTINUOUSLY INTEGRATE, or they can define another SHARED KERNEL held in common by a closely related set of components. All these strategies can coexist easily within a large-scale structure of PLUGGABLE COMPONENTS. Another option, in some cases, is to use a PUBLISHED LANGUAGE for the plug-in interface of the hub.
There are a few downsides to a PLUGGABLE COMPONENT FRAMEWORK. One is that this is a very difficult pattern to apply. It requires precision in the design of the interfaces and a deep enough model to capture the necessary behavior in the ABSTRACT CORE. Another major downside is that applications have limited options. If an application needs a very different approach to the CORE DOMAIN, the structure will get in the way. Developers can specialize the model, but they can’t change the ABSTRACT CORE without changing the protocol of all the diverse components. As a result, the process of continuous refinement of the CORE, refactoring toward deeper insight, is more or less frozen in its tracks.
Fayad and Johnson (2000) give a good look at ambitious attempts at PLUGGABLE COMPONENT FRAMEWORKS in several domains, including a discussion of SEMATECH CIM. The success of such frameworks is a mixed story. Probably the biggest obstacle is the maturity of understanding needed to design a useful framework. A PLUGGABLE COMPONENT FRAMEWORK should not be the first large-scale structure applied on a project, nor the second. The most successful examples have followed after the full development of multiple specialized applications.
Example: The SEMATECH CIM Framework
In a factory producing computer chips, groups (called lots) of silicon wafers are moved from one machine to another through hundreds of steps of processing until the microscopic circuitry being printed and etched into them is complete. The factory needs software that can track each individual lot, recording the exact processing that has been done to it, and then direct either factory workers or automated equipment to take it
to the next appropriate machine and apply the next appropriate process. Such software is called a manufacturing execution system (MES).
Hundreds of different machines from dozens of vendors are used, with carefully tailored recipes at each step of the way. Developing MES software that could deal with such a complex mix was daunting and prohibitively expensive. In response, an industry consortium, SEMATECH, developed the CIM Framework.
The CIM Framework is big and complicated and has many aspects, but two are relevant here. First, the framework defines abstract interfaces for the basic concepts of the semiconductor MES domain—in other words, the CORE DOMAIN in the form of an ABSTRACT CORE. These interface definitions include both behavior and semantics.
Figure 16.24. A highly simplified subset of the CIM interfaces, with sample implementations
If a vendor produces a new machine, they have to develop a specialized implementation of the Process Machine interface. If they adhere to that interface, their machine-control component should plug into any application based on the CIM Framework.
Having defined these interfaces, SEMATECH defined the rules by which they could interact in an application. Any application based on the CIM Framework would have to implement a protocol that hosted objects implementing some subset of those interfaces. If this protocol were implemented, and the application strictly observed the abstract interfaces, then the application could count on the promised services of those interfaces, regardless of implementation. The combination of those interfaces and the protocol for using them constitutes a tightly restrictive large-scale structure.
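As a rough illustration only (these signatures are invented for the sketch; the real CIM Framework interfaces are far richer), the vendor’s obligation looks something like this:

// Invented signatures, loosely based on the description above. The
// actual CIM interfaces define much more behavior and semantics.
interface ProcessMachine {
    String machineId();
    void load(Lot lot);                  // accept a lot of wafers
    void run(Recipe recipe);             // apply one processing step
    Lot unload();
}

class Lot { /* a group of wafers moving through the factory */ }
class Recipe { /* parameters for one processing step */ }

// A vendor’s specialized implementation: if it adheres to the interface,
// it should plug into any application based on the framework.
class AcmeEtcherAdapter implements ProcessMachine {
    private Lot current;
    public String machineId() { return "acme-etcher-7"; }
    public void load(Lot lot) { current = lot; }
    public void run(Recipe recipe) { /* drive the vendor's control API */ }
    public Lot unload() { Lot l = current; current = null; return l; }
}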