Domain-Driven Design
Deep Models
The traditional way of explaining object analysis involves identifying nouns and verbs in the requirements documents and using them as the initial objects and methods. This explanation is recognized as an oversimplification that can be useful for teaching object modeling to beginners. The truth is, though, that initial models usually are naive and superficial, based on shallow knowledge.
For example, I once worked on a shipping application for which my initial idea of an object model involved ships and containers. Ships moved from place to place. Containers were associated and disassociated through load and unload operations. That is an accurate description of some physical shipping activities. It does not turn out to be a very useful model for shipping business software.
Eventually, after months working with shipping experts through many iterations, we evolved a quite different model. It was less obvious to a layperson, but much more relevant to the experts. It was refocused on the business of delivering cargo.
The ships were still there, but abstracted in the form of a “vessel voyage,” a particular trip scheduled for a ship, train, or other carrier. The ship itself was secondary, and could be substituted at the last minute because of maintenance problems or a slipping schedule, while the vessel voyage went on as planned. The shipping container all but disappeared from the model. It did emerge in a cargo-handling application in a different, very complex form, but in the context of the original application, the container was an operational detail. The physical movement of the cargo took a back seat to the transfers of legal responsibility for that cargo. Less obvious objects, such as the “bill of lading,” came to the fore.
Whenever new object modelers showed up on the project, what was their first suggestion? The missing classes: ship and container. They were smart people. They just hadn’t gone through the processes of discovery.
A deep model provides a lucid expression of the primary concerns of the domain experts and their most relevant knowledge while it sloughs off the superficial aspects of the domain. This definition doesn’t mention abstraction. A deep model usually has abstract elements, but it may well have concrete elements where those cut to the heart of the problem.
Versatility, simplicity, and explanatory power come from a model that is truly in tune with the domain. One feature such models almost always have is a simple, though possibly abstract, language that the business experts like to use.
Deep Model/Supple Design
In a process of constant refactoring, the design itself needs to support change. Chapter 10 looks at ways to make a design easy to work with, both for those changing it and for those integrating it with other parts of the system.
Certain characteristics of a design make it easier to change and use. They are not complicated, but they are challenging. “Supple design” and ways to approach it are the subjects of Chapter 10.
One bit of luck is that the very act of transforming the model and code again and again—if each change reflects new understanding—can bring about flexibility at just the points where change is most needed, along with easy ways of doing the common things. A well-worn glove becomes supple at the points where the fingers bend, while other parts are stiff and protective. So although there is a lot of trial and error involved in this approach to modeling and design, the changes can actually become easier to make, and the repeated changes actually move us toward a supple design.
In addition to facilitating change, a supple design contributes to the refinement of the model itself. A MODEL-DRIVEN DESIGN stands on two legs. A deep model makes possible an expressive design. At the same time, a design can actually feed insight into the model discovery process when it has the flexibility to let a developer experiment and the clarity to show a developer what is happening. This half of the feedback loop is essential, because the model we are looking for is not just a nice set of ideas: it is the foundation of the system.
The Discovery Process
To create a design really fitted to the problem at hand, you must first have a model that captures the central relevant concepts of the domain. Actively searching for these concepts and bringing them into the design is the subject of Chapter 9, “Making Implicit Concepts Explicit.”
Because of the close relationship between model and design, the modeling process comes to a halt when the code is hard to refactor. Chapter 10, “Supple Design,” discusses how to write software for software developers, not least yourself, so that it is productive to extend and change. This effort goes hand in hand with further refinements to the model. It often entails more advanced design techniques and more rigor in model definitions.
You will usually depend on creativity and trial and error to find good ways to model the concepts you discover, but sometimes someone has laid down a pattern you can follow. Chapters 11 and 12 discuss the application of “analysis patterns” and “design patterns.” Such patterns are not ready-made solutions, but they feed your knowledge crunching process and narrow your search.
But I’ll start Part III with the most exciting event in domain-driven design. Sometimes, when the stage is set with a MODEL-DRIVEN DESIGN and explicit concepts, you have a breakthrough. An opportunity opens up to transform your software into something more expressive and versatile than you expected. This can mean new features or it can just mean the replacement of a big chunk of rigid code with a simple, flexible expression of a deeper model. Although such breakthroughs don’t come along every day, they are so valuable that when they do happen, the opportunity needs to be recognized and grasped.
Chapter 8 tells the true story of a project on which a process of refactoring toward deeper insight led to a breakthrough. This experience is not something you can plan for. Nonetheless, it provides a good context for thinking about domain refactoring.
Chapter Eight. Breakthrough
The returns from refactoring are not linear. Usually there is a marginal return for a small effort, and the small improvements add up. They fight entropy, and they are the frontline protection against a fossilized legacy. But some of the most important insights come abruptly and send a shock through the project.
Slowly but surely, the team assimilates knowledge and crunches it into a model. Deep models can emerge gradually through a sequence of small refactorings, an object at a time: a tweaked association here, a shifted responsibility there.
Often, though, continuous refactoring prepares the way for something less orderly. Each refinement of code and model gives developers a clearer view. This clarity creates the potential for a breakthrough of insights. A rush of change leads to a model that corresponds on a deeper level to the realities and priorities of the users. Versatility and explanatory power suddenly increase even as complexity evaporates.
This sort of breakthrough is not a technique; it is an event. The challenge lies in recognizing what is happening and deciding how to deal with it. To convey what this experience feels like, I’ll tell a true story of a project I worked on some years ago, and how we arrived at a very valuable deep model.
Story of a Breakthrough
After a long New York winter of refactoring, we had arrived at a model that captured some of the key knowledge of the domain and a design that did some real work for the application. We were developing a core part of a large application for managing syndicated loans in an investment bank.
When Intel wants to build a billion-dollar factory, they need a loan that is too big for any single lending company to take on, so the lenders form a syndicate that pools its resources to support a facility (see sidebar). An investment bank usually acts as syndicate leader, coordinating transactions and other services. Our project was to build software to track and support this whole process.
A Decent Model, and Yet . . .
We were feeling pretty good. Four months before, we had been in deep trouble with a completely unworkable, inherited code base, which we had since wrestled into a coherent MODEL-DRIVEN DESIGN.
The model reflected in Figure 8.1 makes the common case very simple. The Loan Investment is a derived object that represents a particular investor’s contribution to the Loan, proportional to its share in the Facility.
Figure 8.1. A model that assumes lender shares are fixed
* * *
What Is a “Facility”?
A “facility” in this context is not a building. As on most projects, specialized terminology from the domain experts entered our vocabulary and became part of the UBIQUITOUS LANGUAGE. In the domain of commercial banking, a facility is a commitment by a company to lend. Your credit card is a facility that entitles you to borrow on demand up to a prearranged limit at a predetermined interest rate. When you use the card, you create an outstanding loan, and each additional charge is a drawdown against your facility that increases the loan. Finally you pay back the loan principal. You may also pay an annual fee. This is a fee for the privilege of having the card (the facility) and is independent of your loan.
* * *
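To ground the Figure 8.1 model, here is a minimal Java sketch of the idea; the names and structure are my own illustration, not the project’s code. The Loan Investment is not stored but derived, each lender’s cut of any Loan being computed strictly from its Facility share.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Map;

// A sketch of the Figure 8.1 model: lender shares are fixed by the
// Facility, and each Loan Investment is derived from them.
class Facility {
    private final Map<String, BigDecimal> commitmentByLender; // lender -> committed $
    private final BigDecimal limit;

    Facility(Map<String, BigDecimal> commitmentByLender) {
        this.commitmentByLender = commitmentByLender;
        this.limit = commitmentByLender.values().stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    // The derived Loan Investment: a lender's contribution to any Loan
    // under this Facility, assumed exactly proportional to its commitment.
    BigDecimal loanInvestment(String lender, BigDecimal loanAmount) {
        BigDecimal committed =
                commitmentByLender.getOrDefault(lender, BigDecimal.ZERO);
        return loanAmount.multiply(committed)
                .divide(limit, 2, RoundingMode.HALF_UP); // rounded per call
    }
}
```

The two assumptions baked in here, fixed proportionality and per-call rounding, are exactly the ones the rest of this chapter unwinds.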
But there were some disconcerting signs. We kept stumbling over unexpected requirements that complicated the design. A major example was the creeping understanding that the shares in a Facility were only a guideline to participation in any particular loan drawdown. When the borrower requests its money, the leader of the syndicate calls all members for their shares.
When called, the investors usually cough up their share, but often they negotiate with other members of the syndicate and invest less (or more). We had accommodated this by adding Loan Adjustments to the model.
Figure 8.2. A model incrementally changed to solve problems. Loan Adjustments track departures from the share a lender originally agreed to in the Facility.
Refinements of this kind allowed us to keep up as the rules of various transactions became clearer. But complexity was increasing, and we did not seem to be converging quickly onto really solid functionality.
Even more troubling were subtle rounding inconsistencies that we had not been able to squash with increasingly complex algorithms. True, in a $100 million (MM) deal, no one cares about where the extra pennies go, but bankers don’t trust software that cannot meticulously account for those pennies. We began to suspect that our difficulties were symptomatic of a basic design problem.
The Breakthrough
Suddenly one week it dawned on us what was wrong. Our model tied together the Facility and Loan shares in a way that was not appropriate to the business. This revelation had wide repercussions. With the business experts nodding, enthusiastically helping—and, I dare say, wondering what took us so long—we hashed out a new model on a whiteboard. Although the details hadn’t jelled yet, we knew the crucial feature of the new model: shares of the Loan and those of the Facility could change independently of each other. With that insight, we walked through numerous scenarios using a visualization of the new model that looked something like this:
Figure 8.3. A drawdown distributed based on Facility shares
This diagram says that the borrower has chosen to draw an initial $50MM from the $100MM committed under the Facility. The three lenders chip in their shares in exact proportion to the Facility shares, resulting in a $50MM Loan divided among the lenders.
Then, in Figure 8.4, the borrower draws an additional $30MM, bringing his outstanding Loan to $80MM, still under the $100MM limit of the Facility. This time, Company B chooses not to participate, letting Company A take an extra share. The shares of the drawdown reflect these investment choices. When the drawdown amounts are added to the Loan, the shares of the Loan are no longer proportional to the shares of the Facility. This is common.
Figure 8.4. Lender B opts out of a second drawdown.
Figure 8.5. Principal payments are always distributed proportionally to shares in the outstanding Loan.
When the borrower pays down the Loan, the money is divided among the lenders according to the shares of the Loan, not the Facility. Likewise, interest payments will be divided according to the Loan shares.
Figure 8.6. Fee payments are always distributed proportionally to shares in the Facility.
On the other hand, when the borrower pays a fee for the privilege of having the Facility available, this money is divided according to the Facility shares, regardless of who actually has lent money. The Loan is unchanged by fee payments. There are even scenarios in which lenders trade shares of fees separately from their shares of interest, and so on.
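These distribution rules are mechanical enough to sketch in code. The Java below walks the exact scenario of Figures 8.3 through 8.6, using my own illustrative names and simplifications rather than the project’s design: Loan shares accumulate from what each lender actually funds, while the two kinds of payment prorate over different sets of shares.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.LinkedHashMap;
import java.util.Map;

// Walking through Figures 8.3-8.6 with illustrative types (amounts in $MM).
public class DistributionRules {

    public static void main(String[] args) {
        // Facility shares: A=50, B=30, C=20 of the 100MM commitment.
        Map<String, BigDecimal> facilityShares = new LinkedHashMap<>();
        facilityShares.put("A", bd("50"));
        facilityShares.put("B", bd("30"));
        facilityShares.put("C", bd("20"));

        // Loan shares accumulate from what each lender actually funds.
        Map<String, BigDecimal> loanShares = new LinkedHashMap<>();

        // Figure 8.3: first drawdown of 50MM, funded in Facility proportions.
        fund(loanShares, Map.of("A", bd("25"), "B", bd("15"), "C", bd("10")));

        // Figure 8.4: second drawdown of 30MM; B opts out, A takes B's part.
        fund(loanShares, Map.of("A", bd("24"), "C", bd("6")));
        // The Loan is now A=49, B=15, C=16 (total 80MM): no longer
        // proportional to the Facility shares of 50/30/20.

        // Figure 8.5: principal (and interest) follow *Loan* shares.
        System.out.println("10MM principal: " + prorate(loanShares, bd("10")));
        // Figure 8.6: fees follow *Facility* shares, whoever actually lent.
        System.out.println("2MM fee: " + prorate(facilityShares, bd("2")));
    }

    static void fund(Map<String, BigDecimal> loan,
                     Map<String, BigDecimal> drawdown) {
        drawdown.forEach((lender, amt) -> loan.merge(lender, amt, BigDecimal::add));
    }

    // Naive proration, exact for these round numbers; real share math needs
    // penny-conserving rounding (see the end of the chapter).
    static Map<String, BigDecimal> prorate(Map<String, BigDecimal> shares,
                                           BigDecimal amount) {
        BigDecimal total = shares.values().stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        Map<String, BigDecimal> out = new LinkedHashMap<>();
        shares.forEach((owner, share) -> out.put(owner,
                amount.multiply(share).divide(total, 3, RoundingMode.HALF_UP)));
        return out;
    }

    static BigDecimal bd(String s) { return new BigDecimal(s); }
}
```

Running this prints a $10MM principal payment split 6.125/1.875/2.000 by Loan shares, and a $2MM fee split 1.000/0.600/0.400 by Facility shares.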
A Deeper Model
We had two deep insights. First was the realization that our “Investments” and “Loan Investments” were just two special cases of a general and fundamental concept: shares. Shares of a facility, shares of a loan, shares of a payment distribution. Shares, shares everywhere. Shares of any divisible value.
A few tumultuous days later I had sketched a model of shares, drawing on the language used in the discussions with experts and the scenarios we had explored together.
Figure 8.7. An abstract model of shares
I also sketched a new loan model to go with it.
Figure 8.8. The Loan model using Share Pie
There were no longer specialized objects for the shares of a Facility or a Loan. They both were broken down into the more intuitive “Share Pie.” This generalization allowed the introduction of “shares math,” vastly simplifying the calculation of shares in any transaction, and making those calculations more expressive, concise, and easily combined.
But most of all, problems went away because the new model removed an inappropriate constraint. It freed the Loan’s Shares to depart from the proportions of the Facility’s Shares, while keeping in place the valid constraints on totals, fee distributions, and so on. The Share Pie of the Loan could be adjusted directly, so the Loan Adjustment was no longer needed, and a large amount of special-case logic was eliminated.
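As a rough illustration of what “shares math” might look like, here is a minimal Share Pie sketch in Java; it is my own skeleton under stated assumptions, and the real model surely carried more behavior (validation of totals, proration, and so on):

```java
import java.math.BigDecimal;
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal Share Pie sketch: an immutable value object representing
// shares of any divisible value -- a Facility, a Loan, or a payment.
final class SharePie {
    private final Map<String, BigDecimal> shares; // owner -> amount

    SharePie(Map<String, BigDecimal> shares) {
        this.shares = new LinkedHashMap<>(shares);
    }

    BigDecimal total() {
        return shares.values().stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    // "Shares math": combining pies owner by owner replaces special-case
    // logic such as Loan Adjustments. A drawdown is simply a pie added to
    // the Loan's pie; a payment is a pie subtracted from it.
    SharePie plus(SharePie other) {
        Map<String, BigDecimal> sum = new LinkedHashMap<>(shares);
        other.shares.forEach((owner, amt) -> sum.merge(owner, amt, BigDecimal::add));
        return new SharePie(sum);
    }

    SharePie negated() {
        Map<String, BigDecimal> neg = new LinkedHashMap<>();
        shares.forEach((owner, amt) -> neg.put(owner, amt.negate()));
        return new SharePie(neg);
    }

    SharePie minus(SharePie other) {
        return plus(other.negated());
    }

    @Override
    public String toString() { return shares.toString(); }
}
```

With something like this in place, a Loan holds one Share Pie and a Facility another, and an expression such as `loan.plus(drawdown).minus(principalPayment)` states a whole transaction history directly, with no adjustment objects in sight.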
The Loan Investment had disappeared, and at this point we realized that “loan investment” was not a banking term. In fact, the business experts had told us a number of times that they didn’t understand it. They had deferred to our software knowledge and assumed it was useful to the technical design. Actually, we had created it based on our incomplete understanding of the domain.
Suddenly, on the basis of this new way of looking at the domain, we could run through every scenario we had ever encountered relatively effortlessly, much more simply than ever before. And our model diagrams made perfect sense to the business experts, who had often indicated that the diagrams were “too technical” for them. Even just sketching on a whiteboard, we could see that our most persistent rounding problems would be pulled out by the roots, allowing us to scrap some of the complicated rounding code.
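Why would a shares model root out rounding bugs? Because every distribution now flows through one proration of one Share Pie, a single routine can guarantee that the pieces always sum to the whole. The book does not spell out the project’s actual rounding scheme, but the Java below sketches one standard technique, largest-remainder allocation, assuming amounts stated in whole cents:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of penny-conserving proration via largest-remainder allocation.
// Each amount is floored to whole cents, then the leftover pennies go to
// the owners with the largest fractional remainders, so the results
// always sum exactly to the amount being distributed.
final class ConservingProration {

    static Map<String, BigDecimal> prorate(Map<String, BigDecimal> shares,
                                           BigDecimal amount) {
        BigDecimal total = shares.values().stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        Map<String, BigDecimal> result = new LinkedHashMap<>();
        Map<String, BigDecimal> remainders = new LinkedHashMap<>();
        BigDecimal allocated = BigDecimal.ZERO;

        for (Map.Entry<String, BigDecimal> e : shares.entrySet()) {
            BigDecimal exact = amount.multiply(e.getValue())
                    .divide(total, 10, RoundingMode.HALF_UP);
            BigDecimal floored = exact.setScale(2, RoundingMode.FLOOR);
            result.put(e.getKey(), floored);
            remainders.put(e.getKey(), exact.subtract(floored));
            allocated = allocated.add(floored);
        }

        // Hand out the missing pennies, largest remainder first.
        BigDecimal penny = new BigDecimal("0.01");
        BigDecimal leftover = amount.setScale(2).subtract(allocated);
        while (leftover.compareTo(BigDecimal.ZERO) > 0) {
            String luckiest = remainders.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .orElseThrow()
                    .getKey();
            result.merge(luckiest, penny, BigDecimal::add);
            remainders.put(luckiest, BigDecimal.ZERO);
            leftover = leftover.subtract(penny);
        }
        return result;
    }
}
```

For example, prorating $10.00 over three equal shares yields 3.33, 3.33, and 3.34 rather than three 3.33s that quietly lose a penny, which is precisely the kind of discipline bankers expect.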
Our new model worked well. Really, really well.
And we all felt sick!
A Sobering Decision
You might reasonably assume that we would have been elated at this point. We were not. We were under a severe deadline; the project was already dangerously behind schedule. Our dominant emotion was fear.
The gospel of refactoring is that you always go in small steps, always keeping everything working. But to refactor our code to this new model would require changing a lot of supporting code, and there would be few, if any, stable stopping points in between. We could see some small improvements we could make, but none that would take us closer to the new concept. We could see a sequence of small steps to get there, but parts of the application would be disabled along the way. And this was before the age when automated tests were widely used on such projects. We had none, so there was bound to be unforeseen breakage.
And it was going to take effort. We were already exhausted from months of pushing.
At this point, we had a meeting with our project manager that I will never forget. Our manager was an intelligent and bold man. He asked a series of questions:
Q: How long would it take to get back to current functionality with the new design?
A: About three weeks.
Q: Could we solve the problems without it?
A: Probably. But no way to be sure.
Q: Would we be able to move forward in the next release if we didn’t do it now?
A: Forward movement would be slow without the change. And the change would be much harder once we had an installed base.
Q: Did we think it was the right thing to do?
A: We knew the political situation was unstable, so we’d cope if we had to. And we were tired. But, yes, it was a simpler solution that fit the business much better. In the long run it was lower risk.
He gave us the go-ahead and told us he would handle the heat. I’ve always had tremendous admiration for the courage and trust it took for him to make that decision.
We busted our butts and got it done in three weeks. It was a big job, but it went surprisingly smoothly.
The Payoff
The mystifyingly unexpected requirement changes stopped. The rounding logic, though never exactly simple, stabilized and made sense. We delivered version one and the way was clear to version two. My nervous breakdown was narrowly averted.
As version two evolved, this Share Pie became the unifying theme of the whole application. Technical people and business experts used it to discuss the system. Marketing people used it to explain the features to prospective customers. Those prospects and customers immediately grasped it and used it to discuss features. It truly became part of the UBIQUITOUS LANGUAGE because it got to the heart of what loan syndication is about.