Domain-Driven Design


by Eric Evans


  A few days later, mysterious problems surfaced in the bill-payment application module for which the Charge had originally been written. Strange Charges appeared that no one remembered entering and that didn’t make any sense. The program began to crash when some functions were used, particularly the month-to-date tax report. Investigation revealed that the crash resulted when a function was used that summed up the amount deductible for all the current month’s payments. The mystery records had no value in the “percent deductible” field, although the validation of the data-entry application required it and even put in a default value.

  The problem was that these two groups had different models, but they did not realize it, and there were no processes in place to detect it. Each made assumptions about the nature of a charge that were useful in their context (billing customers versus paying vendors). When their code was combined without resolving these contradictions, the result was unreliable software.
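  To make the collision concrete, here is a minimal sketch of the shared class as the two models might have shaped it. The book shows no code for this story, so every name here is a hypothetical reconstruction.

```java
import java.math.BigDecimal;

// Hypothetical reconstruction of the shared Charge class (not code from the book).
// The bill-payment model assumed every Charge carries a deductible percentage.
class Charge {
    private final BigDecimal amount;
    private final BigDecimal percentDeductible; // required by the payment model

    Charge(BigDecimal amount, BigDecimal percentDeductible) {
        this.amount = amount;
        this.percentDeductible = percentDeductible;
    }

    BigDecimal deductibleAmount() {
        // Throws NullPointerException on records created under the billing model,
        // which had no notion of deductibility and left the field null.
        return amount.multiply(percentDeductible).movePointLeft(2); // percent -> amount
    }
}
```

  The month-to-date tax report summed `deductibleAmount()` over all of the current month’s payments, so a single record created under the other model was enough to crash it.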

  If only they had been more aware of this reality, they could have consciously decided how to deal with it. That might have meant working together to hammer out a common model and then writing an automated test suite to prevent future surprises. Or it might simply have meant an agreement to develop separate models and keep hands off each other’s code. Either way, it starts with an explicit agreement on the boundaries within which each model applies.

  What did they do once they knew about the problem? They created separate Customer Charge and Supplier Charge classes and defined each according to the needs of the corresponding team. The immediate problem having been solved, they went back to doing things just as before. Oh well.
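  A sketch of the kind of split they made, with class shapes assumed, since the book names the classes but shows no code:

```java
import java.math.BigDecimal;
import java.util.Objects;

// Hypothetical shapes for the two classes. Each one now carries only the
// invariants its own context needs.
class CustomerCharge {              // billing customers: no deductibility concept
    private final BigDecimal amount;

    CustomerCharge(BigDecimal amount) {
        this.amount = amount;
    }
}

class SupplierCharge {              // paying vendors: deductibility is mandatory
    private final BigDecimal amount;
    private final BigDecimal percentDeductible;

    SupplierCharge(BigDecimal amount, BigDecimal percentDeductible) {
        this.amount = amount;
        this.percentDeductible = Objects.requireNonNull(
                percentDeductible, "supplier charges must state deductibility");
    }

    BigDecimal deductibleAmount() {
        return amount.multiply(percentDeductible).movePointLeft(2);
    }
}
```

  The tax report now depends only on SupplierCharge, so the missing-field failure is ruled out by construction rather than by discipline.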

  Although we seldom think about it explicitly, the most fundamental requirement of a model is that it be internally consistent; that its terms always have the same meaning, and that it contain no contradictory rules. The internal consistency of a model, such that each term is unambiguous and no rules contradict, is called unification. A model is meaningless unless it is logically consistent. In an ideal world, we would have a single model spanning the whole domain of the enterprise. This model would be unified, without any contradictory or overlapping definitions of terms. Every logical statement about the domain would be consistent.

  But the world of large systems development is not the ideal world. To maintain that level of unification in an entire enterprise system is more trouble than it is worth. It is necessary to allow multiple models to develop in different parts of the system, but we need to make careful choices about which parts of the system will be allowed to diverge and what their relationship to each other will be. We need ways of keeping crucial parts of the model tightly unified. None of this happens by itself or through good intentions. It happens only through conscious design decisions and institution of specific processes. Total unification of the domain model for a large system will not be feasible or cost-effective.

  Sometimes people fight this fact. Most people see the price that multiple models exact by limiting integration and making communication cumbersome. On top of that, having more than one model somehow seems inelegant. This resistance to multiple models sometimes leads to very ambitious attempts to unify all the software in a large project under a single model. I know I’ve been guilty of this kind of overreaching. But consider the risks.

  1. Too many legacy replacements may be attempted at once.

  2. Large projects may bog down because the coordination overhead exceeds their abilities.

  3. Applications with specialized requirements may have to use models that don’t fully satisfy their needs, forcing them to put behavior elsewhere.

  4. Conversely, attempting to satisfy everyone with a single model may lead to complex options that make the model difficult to use.

  What’s more, model divergences are as likely to come from political fragmentation and differing management priorities as from technical concerns. And the emergence of different models can be a result of team organization and development process. So even when no technical factor prevents full integration, the project may still face multiple models.

  Given that it isn’t feasible to maintain a unified model for an entire enterprise, we don’t have to leave ourselves at the mercy of events. Through a combination of proactive decisions about what should be unified and pragmatic recognition of what is not unified, we can create a clear, shared picture of the situation. With that in hand, we can set about making sure that the parts we want to unify stay that way, and the parts that are not unified don’t cause confusion or corruption.

  We need a way to mark the boundaries and relationships between different models. We need to choose our strategy consciously and then follow our strategy consistently.

  This chapter lays out techniques for recognizing, communicating, and choosing the limits of a model and its relationships to others. It all starts with mapping the current terrain of the project. A BOUNDED CONTEXT defines the range of applicability of each model, while a CONTEXT MAP gives a global overview of the project’s contexts and the relationships between them. This reduction of ambiguity will, in and of itself, change the way things happen on the project, but it isn’t necessarily enough. Once we have a BOUNDED CONTEXT, a process of CONTINUOUS INTEGRATION will keep the model unified within it.

  Then, starting from this stable situation, we can start to migrate toward more effective strategies for BOUNDING CONTEXTS and relating them, ranging from closely allied contexts with SHARED KERNELS to loosely coupled models that go their SEPARATE WAYS.

  Figure 14.1. A navigation map for model integrity patterns

  Bounded Context

  Cells can exist because their membranes define what is in and out and determine what can pass.

  Multiple models coexist on big projects, and this works fine in many cases. Different models apply in different contexts. For example, you may have to integrate your new software with an external system over which your team has no control. A situation like this is probably clear to everyone as a distinct context where the model under development doesn’t apply, but other situations can be more vague and confusing. In the story that opened this chapter, two teams were working on different functionality for the same new system. Were they working on the same model? Their intention was to share at least part of what they did, but there was no demarcation to tell them what they did or did not share. And they had no process in place to hold a shared model together or quickly detect divergences. They realized they had diverged only after their system’s behavior suddenly became unpredictable.

  Even a single team can end up with multiple models. Communication can lapse, leading to subtly conflicting interpretations of the model. Older code often reflects an earlier conception of the model that is subtly different from the current model.

  Everyone is aware that the data format of another system is different and calls for a data conversion, but this is only the mechanical dimension of the problem. More fundamental is the difference in the models implicit in the two systems. When the discrepancy is not with an external system, but within the same code base, it is even less likely to be recognized. Yet this happens on all large team projects.

  Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confused. It is often unclear in what context a model should not be applied.

  Failure to keep things straight is ultimately revealed when the running code doesn’t work right, but the problem starts in the way teams are organized and the way people interact. Therefore, to clarify the context of a model, we have to look at both the project and its end products (code, database schemas, and so on).

  A model applies in a context. The context may be a certain part of the code, or the work of a particular team. For a model invented in a brainstorming session, the context could be limited to that particular conversation. The context of a model used in an example in this book is that particular example section and any later discussion of it. The model context is whatever set of conditions must apply in order to be able to say that the terms in a model have a specific meaning.

  To begin to solve the problems of multiple models, we need to define explicitly the scope of a particular model as a bounded part of a software system within which a single model will apply and will be kept as unified as possible. This definition has to be reconciled with the team organization.

  Therefore:

  Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don’t be distracted or confused by issues outside.

  A BOUNDED CONTEXT delimits the applicability of a particular model so that team members have a clear and shared understanding of what has to be consistent and how it relates to other CONTEXTS. Within that CONTEXT, work to keep the model logically unified, but do not worry about applicability outside those bounds. In other CONTEXTS, other models apply, with differences in terminology, in concepts and rules, and in dialects of the UBIQUITOUS LANGUAGE. By drawing an explicit boundary, you can keep the model pure, and therefore potent, where it is applicable. At the same time, you avoid confusion when shifting your attention to other CONTEXTS. Integration across the boundaries necessarily will involve some translation, which you can analyze explicitly.
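  One simple physical manifestation of such a boundary is a separate namespace per CONTEXT. A minimal sketch, with package names assumed, in which the word “Charge” legitimately means two different things:

```java
// file: com/example/billing/Charge.java -- the customer-billing CONTEXT
package com.example.billing;

public class Charge {
    // "Charge" here means an amount billed to a customer; no deductibility.
}
```

```java
// file: com/example/payables/Charge.java -- the vendor-payment CONTEXT
package com.example.payables;

public class Charge {
    // "Charge" here means a payment to a vendor, with tax-deductibility rules.
}
```

  Neither class is a corruption of the other; each stays pure within its own bounds, and any integration between the two packages goes through explicit translation.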

  * * *

  BOUNDED CONTEXTS Are Not MODULES

  The issues are confused sometimes, but these are different patterns with different motivations. True, when two sets of objects are recognized as making up different models, they are almost always placed in separate MODULES. Doing so does provide different name spaces (essential for different CONTEXTS) and some demarcation.

  But MODULES also organize the elements within one model; they don’t necessarily communicate an intention to separate CONTEXTS. The separate name spaces that MODULES create within a BOUNDED CONTEXT actually make it harder to spot accidental model fragmentation.

  * * *

  Example: Booking Context

  A shipping company has an internal project to develop a new application for booking cargo. This application is to be driven by an object model. What is the BOUNDED CONTEXT within which this model applies? To answer this question, we have to look at what is happening on the project. Keep in mind, this is a look at the project as it is, not as it ideally should be.

  One project team is working on the booking application itself. They are not expected to modify the model objects, but the application they are building has to display and manipulate those objects. This team is a consumer of the model. The model is valid within the application (its primary consumer), and therefore the booking application is in bounds.

  The completed bookings have to be passed to the legacy cargo-tracking system. A decision was made up front that the new model would depart from that of the legacy, so the legacy cargo-tracking system is outside the boundary. Necessary translation between the new model and the legacy is to be the responsibility of the legacy maintenance team. The translation mechanism is not driven by the model. It is not in the BOUNDED CONTEXT. (It is part of the boundary itself, which will be discussed in CONTEXT MAP.) It is good that translation is out of CONTEXT (not based on the model). It would be unrealistic to ask the legacy team to make any real use of the model because their primary work is out of CONTEXT.
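  A sketch of such boundary translation, with all names hypothetical since the book shows no code here. The translator knows both representations without belonging to either model:

```java
// Minimal stand-ins for the new model (names assumed, not from the book).
record Location(String unLocode) {}
record Booking(String cargoId, Location origin, Location destination) {}

// Owned by the legacy maintenance team; it lives at the boundary, not in the model.
class LegacyTrackingTranslator {
    /** Flattens a Booking from the new model into the legacy fixed-field line. */
    String toLegacyRecord(Booking booking) {
        return String.join("|",
                booking.cargoId(),
                booking.origin().unLocode(),       // the new model's rich Location...
                booking.destination().unLocode()); // ...reduced to bare legacy codes
    }
}
```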

  The team responsible for the model deals with the whole life cycle of each object, including persistence. Because this team has control of the database schema, they’ve been deliberately keeping the object-relational mapping straightforward. In other words, the schema is being driven by the model and therefore is in bounds.
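  What a deliberately straightforward mapping can look like, assuming JPA and hypothetical class and column names (the book does not show the project’s mapping code): one table per model class, one column per attribute, so the schema stays driven by the model.

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// A sketch only; names are assumptions, not the project's actual schema.
@Entity
@Table(name = "booking")
public class Booking {
    @Id
    private String cargoId;                // one column per attribute,

    @Column(name = "origin_unlocode")
    private String originUnLocode;         // columns named after the model's terms

    @Column(name = "destination_unlocode")
    private String destinationUnLocode;

    protected Booking() {} // no-arg constructor required by JPA
}
```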

  Yet another team is working on a model and application for scheduling the voyages of the cargo ships. The scheduling and booking teams were initiated together, and both teams had intended to produce a single, unified system. The two teams have casually coordinated with each other, and they occasionally share objects, but they are not systematic about it. They are not working in the same BOUNDED CONTEXT. This is a risk, because they do not think of themselves as working on separate models. To the extent they integrate, there will be problems unless they put in place processes to manage the situation. (The SHARED KERNEL, discussed later in this chapter, might be a good choice.) The first step, though, is to recognize the situation as it is. They are not in the same CONTEXT and should stop trying to share code until some changes are made.

  This BOUNDED CONTEXT is made up of all those aspects of the system that are driven by this particular model: the model objects, the database schema that persists the model objects, and the booking application. Two teams work primarily in this CONTEXT: the modeling team and the application team. Information has to be exchanged with the legacy tracking system, and the legacy team has primary responsibility for the translation at this boundary, with cooperation from the modeling team. There is no clearly defined relationship between the booking model and the voyage schedule model, and defining that relationship should be one of those teams’ first actions. In the meantime, they should be very careful about sharing code or data.

  So, what has been gained by defining this BOUNDED CONTEXT? For the teams working in CONTEXT: clarity. Those two teams know they must stay consistent with one model. They make design decisions in that knowledge and watch for fractures. For the teams outside: freedom. They don’t have to walk in the gray zone, not using the same model, yet somehow feeling they should. But the most concrete gain in this particular case is probably realizing the risk of the informal sharing between the booking model team and the voyage schedule team. To avoid problems, they really need to decide on the cost/benefit trade-offs of sharing and put in processes to make it work. This won’t happen unless everyone understands where the bounds of the model contexts are.

  Of course, boundaries are special places. The relationships between a BOUNDED CONTEXT and its neighbors require care and attention. The CONTEXT MAP charts the territory, giving the big picture of the CONTEXTS and their connections, while several patterns define the nature of the various relationships between CONTEXTS. And a process of CONTINUOUS INTEGRATION preserves unity of the model within a BOUNDED CONTEXT.

  But before proceeding to all that, what does it look like when unification of a model is breaking down? How do you recognize conceptual splinters?

  Recognizing Splinters Within a BOUNDED CONTEXT

  Many symptoms may indicate unrecognized model differences. Some of the most obvious are when coded interfaces don’t match up. More subtly, unexpected behavior is a likely sign. The CONTINUOUS INTEGRATION process with automated tests can help catch these kinds of problems. But the early warning is usually a confusion of language.
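  A sketch (JUnit 5, reusing the hypothetical SupplierCharge from the earlier sketch) of the kind of automated test such a process runs: it pins down one team’s assumption so the build fails the moment code based on a diverging model is merged.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class SupplierChargeTest {
    @Test
    void deductibleAmountAppliesThePercentage() {
        SupplierCharge charge = new SupplierCharge(
                new BigDecimal("200.00"), new BigDecimal("50"));
        // compareTo ignores scale, so the assertion checks value, not formatting.
        assertEquals(0, new BigDecimal("100").compareTo(charge.deductibleAmount()));
    }
}
```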

  Combining elements of distinct models causes two categories of problems: duplicate concepts and false cognates. Duplication of concepts means that there are two model elements (and attendant implementations) that actually represent the same concept. Every time this information changes, it has to be updated in two places with conversions. Every time new knowledge leads to a change in one of the objects, the other has to be reanalyzed and changed too. Except the reanalysis doesn’t happen in reality, so the result is two versions of the same concept that follow different rules and even have different data. On top of that, the team members must learn not one but two ways of doing the same thing, along with all the ways they are being synchronized.
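  A hypothetical illustration of duplicated concepts, with all names assumed: two teams have independently modeled the same real-world vendor, so one fact lives in two places and must be synchronized by hand.

```java
// Payables team's version of the concept.
class Vendor {
    String name;
    String taxId;
}

// Procurement team's version of the same concept, shaped slightly differently.
class Supplier {
    String legalName;   // the same fact under a different field name
    String taxNumber;   // the same fact under different format rules
}

class VendorSync {
    // Every change must be copied across, with conversions, in both directions.
    void copy(Vendor from, Supplier to) {
        to.legalName = from.name;
        to.taxNumber = from.taxId; // works only while the formats happen to agree
    }
}
```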

  False cognates may be slightly less common, but more insidiously harmful. This is the case when two people who are using the same term (or implemented object) think they are talking about the same thing, but really are not. The example in the beginning of this chapter (two different business activities both called Charge) is typical, but conflicts can be even subtler when the two definitions are actually related to the same aspect in the domain, but have been conceptualized in slightly different ways. False cognates lead to development teams that step on each other’s code, databases that have weird contradictions, and confusion in communication within the team. The term false cognate is ordinarily applied to natural languages. For example, English speakers learning Spanish often misuse the word embarazada. This word does not mean “embarrassed”; it means “pregnant.” Oops.

  When you detect these problems, your team will have to make a decision. You may want to pull the model back together and refine the processes to prevent fragmentation. Or the fragmentation may be a result of groups who want to pull the model in different directions for good reasons, and you may decide to let them develop independently. Dealing with these issues is the subject of the remaining patterns in this chapter.

  Continuous Integration

  Having defined a BOUNDED CONTEXT, we must keep it sound.

  When a number of people are working in the same BOUNDED CONTEXT, there is a strong tendency for the model to fragment. The bigger the team, the bigger the problem, but as few as three or four people can encounter serious problems. Yet breaking down the system into ever-smaller CONTEXTS eventually loses a valuable level of integration and coherency.

  Sometimes developers do not fully understand the intent of some object or interaction modeled by someone else, and they change it in a way that makes it unusable for its original purpose. Sometimes they don’t realize that the concepts they are working on are already embodied in another part of the model and they duplicate (inexactly) those concepts and behavior. Sometimes they are aware of those other expressions but are afraid to tamper with them, for fear of corrupting the existing functionality, and so they proceed to duplicate concepts and functionality.

 
