2015-04-26

Advancing Enterprise DDD - Reinstating the Aggregate


Over the past four essays of the Advancing Enterprise DDD series, we have taken a long, hard look at crafting entity aggregates using JPA, and the challenges that arise. We've seen how to choose our cascades and fetch strategies to match aggregate boundaries as closely as possible. And we've learned how to design our entity classes defensively against problems with proxy objects. In this essay, we'll investigate further ways to design aggregates well with JPA. We have two major goals here:
  1. Prevent the poor design and implementation - or lack of it! - of entity aggregates in our domain
  2. Help developers think in terms of Domain Driven Design concepts such as aggregates
In an ideal world, the tools we use would do these things for us. At the very least, they could make these things easy for us. But if the off-the-shelf tools don't support what we are trying to do, we can attempt to do it ourselves in our project infrastructure. We can do this by providing a standard set of interfaces and abstract classes to be used throughout the project. And we can write unit tests that enforce constraints that are difficult to enforce at the compiler level.

We could use documentation and local conventions and rules to fill some of the gaps. But if any two of the following three conditions hold, then I can guarantee you that standards and conventions will not be enough:
  1. You have junior and mid-level developers on your team
  2. You don't always do rigorous code reviews for everything that gets committed
  3. You sometimes face pressure to produce features on a tight schedule
Here are some specific difficulties, as yet unaddressed, that we have seen with building entity aggregates in JPA:
  1. Nothing differentiates an aggregate root from a non-root entity
  2. Nothing prevents a repository class for a non-root entity
  3. Nothing prevents an entity from associating with a non-root entity from another aggregate
  4. Nothing prevents an entity from composing with an entity from another aggregate
  5. Nothing differentiates between associations and compositions
Entities are typically recognized by the JPA @Entity annotation appearing on the class. On some projects, we'll also define an interface or abstract class that all entities inherit from. We could just as easily make a separate marker interface for aggregates as well. For example:

interface Entity {
}

interface RootEntity extends Entity {
}

We can write a unit test that enforces this by reflectively scanning our project's classes, looking for anything with an @Entity annotation that does not inherit from one of these interfaces.
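As a sketch of what such a test might look like: the @Entity annotation below is a local stand-in for javax.persistence.Entity, the marker interface is renamed DomainEntity so it doesn't clash with the annotation in one file, and the hard-coded class set stands in for real classpath scanning (which in practice could be done with a library such as Reflections):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Set;
import java.util.TreeSet;

public class EntityMarkerCheck {

    // stand-in for javax.persistence.Entity, so this sketch is self-contained
    @Retention(RetentionPolicy.RUNTIME)
    @interface Entity {}

    // stand-ins for the project's marker interfaces
    interface DomainEntity {}
    interface RootEntity extends DomainEntity {}

    // compliant: annotated and marked
    @Entity
    static class Customer implements RootEntity {}

    // violation: annotated but missing the marker interface
    @Entity
    static class Rogue {}

    /** Returns the names of @Entity classes that skip the marker interfaces. */
    static Set<String> violations(Set<Class<?>> scanned) {
        Set<String> bad = new TreeSet<>();
        for (Class<?> c : scanned) {
            if (c.isAnnotationPresent(Entity.class)
                    && !DomainEntity.class.isAssignableFrom(c)) {
                bad.add(c.getSimpleName());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Set<Class<?>> scanned = Set.of(Customer.class, Rogue.class);
        System.out.println(violations(scanned)); // prints [Rogue]
    }
}
```

The same scan-and-assert pattern works for the repository check described below.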

Now, let's assume we are using Spring Data JPA to implement some basic CRUD repositories. We could provide our own repository interface that enforces the constraint that only aggregate roots have repositories. If our application were called Storefront, we might do it like so:

import org.springframework.data.repository.CrudRepository;

interface StorefrontRepository<R extends RootEntity>
extends CrudRepository<R, Long> {
}

The compiler will not stop a developer from bypassing this interface altogether, but we can write a unit test to make sure it is used. We might scan the project for classes marked with Spring's @Repository annotation, and fail if that class does not implement StorefrontRepository.

That's a good start, but what about distinguishing between associative and compositional relationships, and restricting these relationships to respect aggregate boundaries?

Vaughn Vernon provides a nice solution, both in his book Implementing Domain-Driven Design (see page 359 "Rule: Reference Other Aggregates by Identity"), as well as in the essay Effective Aggregate Design found on his website. In essence, he limits the use of JPA relationship mappings, such as @ManyToOne, @OneToMany, etc., to within a single aggregate. This is accomplished by forcing an explicit repository retrieval when navigating associations between entity aggregates.

Vernon accomplishes this by providing an identity value object for each of the aggregate roots. Using the example we developed in The Entity and the Aggregate Root, our Customer would have a CustomerId, implemented something like this:

@Embeddable
public class CustomerId {

    @Column(name = "customer_id")
    private Long customerId;

    // JPA requires a no-arg constructor for embeddables
    protected CustomerId() {
    }

    public CustomerId(Long customerId) {
        this.customerId = customerId;
    }

    public Long getCustomerId() {
        return customerId;
    }
}

Our association from Order to Customer is represented by the Order knowing the identity of the Customer:

@Entity
public class Order implements RootEntity {

    @Embedded
    private CustomerId customerId;

    // ...
}

A service that needs to navigate from the Order to the Customer would then do so with the help of the CustomerRepository:

Customer customer =
    customerRepository.retrieve(order.getCustomerId());

This is a very valuable technique for making it clear in the minds of developers that they are crossing an aggregate boundary, and it will help them organize their code with that in mind. As we saw in Cascades and Fetch Strategies and in Overeager Fetch, we cannot always configure our cascades and fetches to fully cover an aggregate. But with this approach, we can at least be sure that cascades and fetches do not leak across aggregate boundaries.

At this point I would humbly suggest that the language of identity above is not ideal. It does make it clear that it is a wrapper for a database ID, but is this really what we want? We are using these value objects within our domain classes, so why not name them in terms of domain modeling, rather than using database terminology? So let's rename the CustomerId from above to CustomerAssociation, and flesh out the example a little further while we're at it. First off, we can create an interface for the associations to implement:

interface Association<R extends RootEntity> {
    Long getId();
}

Our Customer association would now look something like this:

@Embeddable
public class CustomerAssociation implements Association<Customer> {

    @Column(name = "customer_id")
    private Long customerId;

    // JPA requires a no-arg constructor for embeddables
    protected CustomerAssociation() {
    }

    public CustomerAssociation(Long customerId) {
        this.customerId = customerId;
    }

    @Override
    public Long getId() {
        return customerId;
    }
}

To keep the association metaphor going, let's add a findByAssociation method to our StorefrontRepository:

interface StorefrontRepository<R extends RootEntity>
extends CrudRepository<R, Long> {

    default R findByAssociation(Association<R> assoc) {
        return findOne(assoc.getId());
    }
}
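To make the delegation concrete, here is a minimal, self-contained sketch of the association lookup, with a map-backed class standing in for the Spring Data repository (the Customer fields and the sample values are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class AssociationLookupSketch {

    interface RootEntity {}

    interface Association<R extends RootEntity> {
        Long getId();
    }

    static class Customer implements RootEntity {
        final Long id;
        final String name;
        Customer(Long id, String name) { this.id = id; this.name = name; }
    }

    // the association knows only the identity of the Customer
    static class CustomerAssociation implements Association<Customer> {
        private final Long customerId;
        CustomerAssociation(Long customerId) { this.customerId = customerId; }
        @Override public Long getId() { return customerId; }
    }

    /** Map-backed stand-in for a Spring Data repository. */
    static class CustomerRepository {
        private final Map<Long, Customer> store = new HashMap<>();
        void save(Customer c) { store.put(c.id, c); }
        Customer findOne(Long id) { return store.get(id); }
        Customer findByAssociation(Association<Customer> assoc) {
            return findOne(assoc.getId());
        }
    }

    public static void main(String[] args) {
        CustomerRepository repo = new CustomerRepository();
        repo.save(new Customer(42L, "Alice"));
        Customer found = repo.findByAssociation(new CustomerAssociation(42L));
        System.out.println(found.name); // prints Alice
    }
}
```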

Now our services can follow the association like so:

Customer customer = customerRepository.findByAssociation(
    order.getCustomerAssociation());

This all looks great, but what's to prevent a developer from creating a direct relationship to another aggregate root? The construction below remains entirely legal:

@Entity
public class Order implements RootEntity {

    @ManyToOne(fetch = FetchType.LAZY, cascade = {})
    private Customer customer;

    // ...
}

There's no way we can turn this into a compiler error, so we resort to the next best thing: a unit test. Our unit tests run regularly, so even if the compiler allows the example above, the mistake will not go unnoticed for long. Here is some pseudocode for a test that will do the job:
  • Use classpath scanning to iterate over all the entity classes in the domain.
    • Use reflection to iterate over all the fields tagged with annotations such as @OneToMany.
      • If the type of the field is a sub-type of RootEntity, then fail.
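Here is a self-contained sketch of that pseudocode. The @ManyToOne annotation is a local stand-in for the JPA annotation, and the sample classes and hard-coded class set stand in for the real domain and classpath scanning:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class AggregateBoundaryCheck {

    // stand-in for javax.persistence.ManyToOne
    @Retention(RetentionPolicy.RUNTIME)
    @interface ManyToOne {}

    interface RootEntity {}

    static class Customer implements RootEntity {}

    static class OrderItem {} // a non-root entity

    static class Order implements RootEntity {
        @ManyToOne Customer customer; // relationship to another root: flagged
        List<OrderItem> items;        // not a RootEntity: fine
    }

    /** Names of relationship fields whose target is an aggregate root. */
    static Set<String> violations(Set<Class<?>> entities) {
        Set<String> bad = new TreeSet<>();
        for (Class<?> c : entities) {
            for (Field f : c.getDeclaredFields()) {
                if (f.isAnnotationPresent(ManyToOne.class)
                        && RootEntity.class.isAssignableFrom(f.getType())) {
                    bad.add(c.getSimpleName() + "." + f.getName());
                }
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        System.out.println(violations(Set.of(Order.class, Customer.class)));
        // prints [Order.customer]
    }
}
```

A full version would also check the other relationship annotations, and the element types of annotated collections.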
There's still one question that we haven't answered yet: How do we prevent an entity from forming a compositional relationship with an entity from another aggregate? At the moment there is nothing preventing us from doing something like this:

@Entity
public class Customer implements RootEntity {

    @ManyToOne(fetch = FetchType.LAZY, cascade = {})
    private OrderItem orderItem;

    // ...
}

To prevent things like this, we can make use of F-bounded polymorphism, which is a common enough technique in Scala, but might seem a little strange to a Java programmer. The first thing we do is modify our Entity interface to specify the root of the entity aggregate as a type parameter:

public interface Entity<R extends RootEntity<R>> {
}

This causes ripples throughout the code developed so far, starting with RootEntity. This is where the F-bounded polymorphism comes in:

public interface RootEntity<R extends RootEntity<R>>
extends Entity<R> {
}

The outcome of this use of type parameters is to force a RootEntity sub-type to specify itself as R. In essence, it prevents us from saying that an Order belongs to the aggregate that has a Customer as its root. Trying to do this will give a compiler error about a type bounds mismatch:

// does not compile!
public class Order implements RootEntity<Customer> {
}

We need to modify all our entity classes to specify their root:

public class Customer implements RootEntity<Customer> {
}

public class Order implements RootEntity<Order> {
}

public class OrderItem implements Entity<Order> {
}

We also need to adjust our type signatures for associations and repositories:

interface Association<R extends RootEntity<R>> {
    Long getId();
}

interface StorefrontRepository<R extends RootEntity<R>>
extends CrudRepository<R, Long> {
}

Now we can write a unit test to prevent the example above, where Customer established a @ManyToOne with an OrderItem. We simply scan our entity classes for fields annotated with JPA's @ManyToOne, @OneToOne, etc., and then use reflection to ensure that the root of the declaring entity is the same as the root of the referenced entity.
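As a sketch of that check, again with a stand-in @ManyToOne and hard-coded classes in place of classpath scanning: the root of each entity class is read off the type argument of its Entity or RootEntity interface, and a relationship field is flagged when the declaring entity and the target entity have different roots:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Set;
import java.util.TreeSet;

public class SameAggregateCheck {

    // stand-in for javax.persistence.ManyToOne
    @Retention(RetentionPolicy.RUNTIME)
    @interface ManyToOne {}

    interface Entity<R extends RootEntity<R>> {}
    interface RootEntity<R extends RootEntity<R>> extends Entity<R> {}

    static class Customer implements RootEntity<Customer> {}
    static class Order implements RootEntity<Order> {}
    static class OrderItem implements Entity<Order> {
        @ManyToOne Order order; // same aggregate: allowed
    }
    static class BadCustomer implements RootEntity<BadCustomer> {
        @ManyToOne OrderItem orderItem; // reaches into the Order aggregate: flagged
    }

    /** The aggregate root of an entity class, read from its Entity<R> type argument. */
    static Class<?> rootOf(Class<?> entity) {
        for (Type t : entity.getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                ParameterizedType p = (ParameterizedType) t;
                if (Entity.class.isAssignableFrom((Class<?>) p.getRawType())) {
                    return (Class<?>) p.getActualTypeArguments()[0];
                }
            }
        }
        throw new IllegalArgumentException(entity + " is not an entity");
    }

    /** Relationship fields whose target entity has a different aggregate root. */
    static Set<String> violations(Set<Class<?>> entities) {
        Set<String> bad = new TreeSet<>();
        for (Class<?> c : entities) {
            for (Field f : c.getDeclaredFields()) {
                if (f.isAnnotationPresent(ManyToOne.class)
                        && !rootOf(c).equals(rootOf(f.getType()))) {
                    bad.add(c.getSimpleName() + "." + f.getName());
                }
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Set<Class<?>> all = Set.of(Customer.class, Order.class,
                OrderItem.class, BadCustomer.class);
        System.out.println(violations(all)); // prints [BadCustomer.orderItem]
    }
}
```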

As we've seen today, reflective, classpath-scanning unit tests can be a great way to enforce project constraints that are difficult or impossible to enforce at the compiler level. We've also developed a nice little framework for enforcing the basic constraints of DDD aggregation in a Java EE application using JPA. This is quite a bit of local infrastructure to develop for the sake of ensuring that our entity aggregates are well formed. You could adopt some or all of it to help your team design aggregates well. How far you choose to go will depend on the needs of your team.

If, however, you are in the planning stages of a new project, I would urge you to consider alternatives to JPA and relational database technologies that might be better suited to doing Domain Driven Design. In the next essay, we will investigate how much easier implementing aggregates might be if we switched to MongoDB, or some other document database.

2 comments:

  1. Hello John!

    Thank you! Of all the tons of documents/samples/tips and everything else you can find online, this is the first time that I read (and understand at the first sight) how to use JPA/Hibernate in the context of DDD.

    I've read the last 6 (or more) posts one after the other and at every step my doubts of doing DDD with JPA are fading away.

    I would suggest you to write a book!

    Thank you!
    Luca

    1. Hi Lucas! Thanks so much for your comments. I'm really, really glad that you find this series so useful. I considered putting the material into a book format, but honestly, all my best material is available here for free now! ;-) Also, as soon as I finished this series, I switched my focus to the longevity project, which is sort of a JPA replacement but for Scala and NoSQL. The project is going great! Take a look for yourself:

      http://longevityframework.github.io/longevity/

      Thanks again Lucas!

      Best, John
