Discontinuing the Introducing Emblem Series

It's been a while since my last post in the Introducing Emblem series, and having had some time to mull it over, I've decided to discontinue the series. It seems silly to be writing what is essentially user documentation as a series of blog posts, when I could be writing user documentation in a more suitable place, such as the wiki for the project's GitHub page.

I've migrated the useful content from the previous blog posts into the wiki. I've also added two new wiki pages there, which would otherwise have made up this blog post. In the new pages, I cover TypeBoundMaps, and how to use emblem's maps with types that do not have exactly one type parameter.

I'll put out a note on Twitter or wherever when I add significant new content to the wiki in the future.


Codename: Life Happens

I don't have to actually look over at GitHub to know what it says. It says the estimate for MMP is mid-September 2015. And I don't need to go to my board to know how much work I have left for an MMP release. And I don't really want to look up at the date in the top right corner of my screen.

And I don't expect that mid-September language to stay there long! When I get around to it I'm taking out all language about estimated dates. I've already pushed back dates in the README once. By this point, it's more of an "It'll be ready when it's ready" kind of thing.

So what happened? Why did I miss my date? As often happens, I underestimated the work. But that wasn't the half of it. I mean, it's really a lot of work. Writing a good library API is tricky, and this library is trying to do some big things. But I can do the work. It's just a matter of time and focus.

I've spent a great deal of energy this year moving my family from Boston to Minneapolis. And I'm quite sure I underestimated the amount of work involved there too! It'll take some time to settle in, but I can truly say that the move is done, and that's a relief. I love the Twin Cities. It's beautiful out here, and people are so much more friendly and relaxed than they are in Boston.

And that's really the main reason why I missed my date. That move was a bear, and I'm still recovering.


Pitching Scala

I find myself in a position of pitching Scala to a couple of players tomorrow, and I thought I would organize my thoughts on what I might pitch to them in bullet points. These are IT generalists with a good familiarity with Java and the JVM, but who are not terribly familiar with Scala. I decided to arrange my bullets in the following manner, going from most important to least important:

  1. Headline pitch. A one-liner that both frames what Scala is, and is provocative enough to capture the listener's attention.
  2. What is Scala? A short list of bullet points that gives a brief summary of what Scala is, while at the same time promoting some of the core benefits of adopting and using Scala.
  3. What else makes Scala so awesome? More in-depth points demonstrating the power and promise of Scala. Valuable topics of discussion, but things that I won't mind too much if the conversation wanders and I don't get around to mentioning them.
Here's my pitch list. What do you think? Please let me know in the comments!

Headline Pitch:
  • Scala is a natural successor to the Java programming language, and is certain to be a key player in the Java ecosystem for years to come.
What is Scala?
  • Scala is a functional/object-oriented hybrid. It is easy to program in Scala with a very Java-like style, and to adopt the best features of functional programming at your own pace.
  • Scala/Java interop is seamless, allowing you to painlessly integrate with the thousands of Java libraries available on the JVM, as well as with your own legacy Java code.
  • Like other next-generation languages that run on the JVM, Scala lets you code with far less boilerplate than Java.
  • Unlike other next-gen JVM languages, Scala is statically typed, improving code quality and coding productivity on large projects and in a team environment. Scala’s type system is actually quite a bit more expressive than Java’s, providing even more compile-time safety to the development team.
What else makes Scala so awesome?
  • A wide variety of Scala libraries and frameworks that are constantly becoming more robust and fully featured.
  • Many of these, such as Akka, Play, and Finagle, provide both Java and Scala APIs, making business-wide transitions from Java to Scala even easier.
  • The Scala community enthusiastically embraces reactive principles, paving the way for more robust, resilient, and scalable systems. This is especially notable in three areas:
    • Scala has powerful asynchronous tools such as Futures and Promises built right into the standard library.
    • ReactiveX-styled asynchronous streams are available in RxScala.
    • Akka is unquestionably the premier actor framework running on the JVM.
  • Spark, written in Scala with APIs in Scala, Java, Python, and R, is rapidly becoming the go-to big data processing and analysis tool.
  • The Scala community is full of friendly and enthusiastic people!

Here are some of the sources I drew on to put this list together:
  • http://www.scala-lang.org/what-is-scala.html
  • https://twitter.github.io/finagle/
  • http://www.reactivemanifesto.org/
  • http://www.reactive-streams.org/announce-1.0.0
  • https://spark.apache.org/docs/latest/
  • http://www.typesafe.com/resources/video/reactive-streams-1-0-0-and-why-you-should-care


Introducing Emblem - TypeKeyMaps

This is the third post in the Introducing Emblem series, where I discuss a reflection-based utility library I have developed called emblem. If you want to try emblem out on your project, follow these instructions. Or read this if you just want to explore it in the Scala REPL.

In the previous post, we looked at the core building block of the emblem library: the TypeKey. In this post, we begin to explore the utility of the TypeKey using TypeKeyMaps.


Advancing Enterprise DDD - Acknowledgements

Many of the ideas I discuss in this series originated in discussions many years back with two good friends and excellent software engineers, Andrew Tolopko and Armen Yampolsky. I am grateful to Armen and Andrew for sparking many different avenues of critical thought regarding JPA, Hibernate, and Domain Driven Design. I would also like to thank Andrew for introducing me to Domain Driven Design in the first place, back in 2005 or so.

Thanks goes to Eric Evans for initially developing the concepts of Domain Driven Design, and presenting them in a book by that same name. This is really a wonderful book that has profoundly changed my way of thinking about software engineering like few other resources have. It was a bit of a challenging book for me on first read, mainly because the ideas he presents were so counter to the ways I typically thought about software engineering. Like many of the best programming books I've read, I didn't really grasp everything the first time through, and I read it a second time in short order. I've read this book three times now, and often re-read sections of it when working on particular problems or challenges. While this book is a little difficult to get through, I highly recommend you give it a read, and keep a copy on your shelf. It's worth the investment.

I would also like to thank Vaughn Vernon for his excellent book, Implementing Domain Driven Design, and for advancing the field of DDD into areas such as CQRS and event-driven architecture. His book provides specific programming and architectural patterns for applying DDD using a variety of technologies, including JPA, and has influenced my thinking here a good deal.

Thanks to the developers of Java, Spring, and Hibernate, for providing a great back-end developer stack that gave me so many employment opportunities over the years. In hindsight, it is easy to focus on all the limitations of this stack, and to overlook its power and all the ways that it was so innovative for its time. Thanks to Scala and MongoDB for helping the software world take that next step forward.

Thanks to StarUML for helping me produce all the pretty UML diagrams found in this series. Also, thanks to Blogger for providing me with a platform to write and share my work.

Finally, I'd like to thank my family, Josie, Jamey, and especially Shu, for being so supportive of me while I worked through this series. They've been very gracious and patient as I've stolen moments from here or there, and disappeared for brief spells on Sunday nights to post.


Advancing Enterprise DDD - Moving Forward

In the last essay of the Advancing Enterprise DDD series, we wrapped up our discussion on immutability, as well as the technical discussion overall. In this essay, we wrap up the series by reflecting on what we've learned, and discussing how to apply these lessons as we move forward.


Advancing Enterprise DDD - Entities, Value Objects, and Identity

In the previous essay of the Advancing Enterprise DDD series, we saw how we might improve our Domain Driven Design model by making our entity classes immutable. Here, we continue to investigate immutable entities by seeing how this affects one of the core DDD building blocks: the value object.


Advancing Enterprise DDD - Migrating to Immutability

In the previous essay of the Advancing Enterprise DDD series, we saw difficulties emerge in maintaining intra-aggregate constraints. These difficulties arose due to the mutability of our entities, and the Java collections they use to maintain relationships between entities. We also saw that it would be difficult or impossible to use immutable alternatives within JPA. In this essay, we set aside the constraints of JPA for a moment, and imagine a world where entities are composed of immutable objects.


Advancing Enterprise DDD - Maintaining Intra-aggregate Constraints

In this essay of the Advancing Enterprise DDD series, we begin to look at immutable objects - a common point of contrast between functional and object-oriented programming approaches. In the last essay, we saw how MongoDB would be a much better vehicle than an RDB for persisting well-shaped DDD aggregates. Here, we continue to investigate the way the tools we use affect our thinking and coding, as we witness the contortions we have to go through to make something as straightforward as an intra-aggregate constraint work in the face of mutability.


Down Time

Some things have come up in my personal life that I need to attend to. I'm not going to stop writing and coding entirely, but it's going to have to take a lower priority for the next few weeks while I take care of things. I'm going to finish the Reactive Scala course, and I'll be able to complete the Advancing Enterprise DDD series on schedule. But the emblem documentation will be delayed, and my release dates for longevity are going to have to get pushed back.

Thanks for your patience, and thanks so much for your readership! I love to write. It really helps me formulate my thoughts and work them through. But it's especially fulfilling to have people reading my material.


Advancing Enterprise DDD - Documents as Aggregates

In the last few essays of the Advancing Enterprise DDD series, we've taken a look at entity aggregates, and the challenges we face in implementing them well. We've seen a great many techniques we can employ using JPA to mitigate these challenges. In this essay, we step out of the JPA and relational database mindset, and consider modeling aggregates with a document database such as MongoDB.


Introducing Emblem - TypeKeys

This is the second post in the Introducing Emblem series, where I discuss a reflection-based utility library I have developed called emblem. The first post presented a high-level overview of the library. In this post, we look at the core building block of the emblem library: the TypeKey.


Introducing Emblem - A Reflection-based Utility Library for Scala

I've been working on this Scala project for a while now, and I found myself needing to develop some reflection-based tools to do some of the things I wanted to do. These things definitely weren't part of my core code, and I thought other people might find them useful as well, so I split them off into a separate library called emblem. It's open-source and on GitHub. The code is clean and has good API docs, but I haven't gotten around to writing any user documentation yet. I plan to do that here, and I'd like to start by describing my basic goals with emblem, and the design principles I decided to follow.


Advancing Enterprise DDD - Reinstating the Aggregate

Over the past four essays of the Advancing Enterprise DDD series, we have taken a long, hard look at crafting entity aggregates using JPA, and the challenges that arise. We've seen how to choose our cascades and fetch strategies to match aggregate boundaries as best as possible. And we've learned how to design our entity classes defensively against problems with proxy objects. In this essay, we'll investigate further ways to design aggregates well with JPA. We have two major goals here:
  1. Prevent the poor design and implementation - or lack of it! - of entity aggregates in our domain
  2. Help developers think in terms of Domain Driven Design concepts such as aggregates
In an ideal world, the tools we use would do these things for us. At the very least, they could help make these things easy for us. But if the off-the-shelf tools don't support what we are trying to do, we can attempt to do it ourselves in our project infrastructure. We can do this by providing a standard set of interfaces and abstract classes to be used throughout the project. And we can write unit tests that enforce constraints that we have difficulty enforcing at the compiler level.

We could use documentation and local conventions and rules to fill some of the gaps. But if any two of the following three hold, then I can guarantee you that standards and conventions will not be enough:
  1. There are junior and mid-level developers on your team
  2. You don't always do rigorous code reviews for everything that gets committed
  3. You sometimes face pressure to produce features on a tight schedule
Here are some specific difficulties, as yet unaddressed, that we have seen when implementing entity aggregates with JPA:
  1. Nothing differentiates an aggregate root from a non-root entity
  2. Nothing prevents a repository class for a non-root entity
  3. Nothing prevents an entity from associating with a non-root entity from another aggregate
  4. Nothing prevents an entity from composing with an entity from another aggregate
  5. Nothing differentiates between associations and compositions
Entities are typically recognized by the JPA @Entity annotation appearing on the class. On some projects, we'll also define an interface or abstract class that all entities inherit from. We could just as easily make a separate marker interface for aggregates as well. For example:

interface Entity {
}

interface RootEntity extends Entity {
}

We can write a unit test that enforces this convention by reflectively scanning the project's classes, looking for anything with an @Entity annotation that does not inherit from one of these interfaces.
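Here is a minimal, self-contained sketch of such a check. I've hard-coded a candidate class list in place of real classpath scanning (which would typically use a library such as Reflections), used a stand-in @Entity annotation rather than the JPA one, and renamed the marker interface to EntityMarker to avoid clashing with the stand-in annotation; all of these names are illustrative, not from the original essay:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

public class EntityMarkerTest {

    // stand-in for javax.persistence.Entity
    @Retention(RetentionPolicy.RUNTIME)
    @interface Entity {}

    // the marker interfaces from the essay (renamed to avoid
    // clashing with the stand-in annotation above)
    interface EntityMarker {}
    interface RootEntity extends EntityMarker {}

    @Entity static class Customer implements RootEntity {}
    @Entity static class OrderItem implements EntityMarker {}
    @Entity static class Rogue {}  // annotated, but unmarked: a violation

    // in a real test, these would come from classpath scanning
    static final Class<?>[] CANDIDATES =
            { Customer.class, OrderItem.class, Rogue.class };

    // collect every @Entity class that skips the marker interfaces
    static List<Class<?>> unmarkedEntities(Class<?>[] candidates) {
        List<Class<?>> violations = new ArrayList<>();
        for (Class<?> c : candidates) {
            if (c.isAnnotationPresent(Entity.class)
                    && !EntityMarker.class.isAssignableFrom(c)) {
                violations.add(c);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        // a real test would fail the build when this list is non-empty
        System.out.println(unmarkedEntities(CANDIDATES));
    }
}
```

A JUnit version would simply assert that the returned list is empty.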

Now, let's assume we are using Spring Data JPA to implement some basic CRUD repositories. We could provide our own repository interface that enforces the constraint that only aggregate roots have repositories. If our application was called Storefront, we might do it like so:

import org.springframework.data.repository.CrudRepository;

interface StorefrontRepository<R extends RootEntity>
extends CrudRepository<R, Long> {
}

The compiler will not stop a developer from bypassing this interface altogether, but we can write a unit test to make sure it is used. We might scan the project for classes marked with Spring's @Repository annotation, and fail if that class does not implement StorefrontRepository.

That's a good start, but what about distinguishing between associative and compositional relationships, and restricting these relationships to respect aggregate boundaries?

Vaughn Vernon provides a nice solution, both in his book Implementing Domain-Driven Design (see page 359 "Rule: Reference Other Aggregates by Identity"), as well as in the essay Effective Aggregate Design found on his website. In essence, he limits the use of JPA relationship mappings, such as @ManyToOne, @OneToMany, etc., to within a single aggregate. This is accomplished by forcing an explicit repository retrieval when navigating associations between entity aggregates.

Vernon accomplishes this by providing an identity value object for each of the aggregate roots. Using the example we developed in The Entity and the Aggregate Root, our Customer would have a CustomerId, implemented something like this:

public class CustomerId {

    @Column(name = "customer_id")
    private Long customerId;

    public CustomerId(Long customerId) {
        this.customerId = customerId;
    }

    public Long getCustomerId() {
        return customerId;
    }
}
Our association from Order to Customer is represented by the Order knowing the identity of the Customer:

public class Order implements RootEntity {

    private CustomerId customerId;

    // ...
}

A service that needs to navigate from the Order to the Customer would then do so with the help of the CustomerRepository:

Customer customer =
    customerRepository.findOne(order.getCustomerId().getCustomerId());

This is a very valuable technique for making it clear in the minds of developers that they are crossing an aggregate boundary, and will help them organize their code with that in mind. As we have seen in Cascades and Fetch Strategies and Overeager Fetch, we cannot always configure our cascades and fetches to fully cover an aggregate. But with this approach, we can at least assure that cascades and fetches do not leak across aggregate boundaries.

At this point I would humbly suggest that the language of identity above is not ideal. It does make it clear that it is a wrapper for a database ID, but is this really what we want? We are using these value objects within our domain classes, so why not name them in the terms of domain modeling, rather than using database terminology? Let's rename the CustomerId from above to CustomerAssociation, and flesh out the example a little further. First off, we can create an interface for the associations to implement:

interface Association<R extends RootEntity> {
    Long getId();
}

Our Customer association would now look something like this:

public class CustomerAssociation implements Association<Customer> {

    @Column(name = "customer_id")
    private Long customerId;

    public CustomerAssociation(Long customerId) {
        this.customerId = customerId;
    }

    public Long getId() {
        return customerId;
    }
}
To keep the association metaphor going, let's add a findByAssociation method to our StorefrontRepository:

public abstract class StorefrontRepository<R extends RootEntity>
implements CrudRepository<R, Long> {

    public R findByAssociation(Association<R> assoc) {
        return findOne(assoc.getId());
    }
}

Now our services can follow the association like so:

Customer customer = customerRepository.findByAssociation(
    order.getCustomerAssociation());

This all looks great, but what's to prevent a developer from creating a direct relationship to another aggregate root? The construction below remains entirely legal:

public class Order implements RootEntity {

    @ManyToOne(fetch = FetchType.LAZY, cascade = {})
    private Customer customer;

    // ...
}

There's no way we can turn this into a compiler error, so we resort to the next best thing: a unit test. Our unit tests run on quite a regular basis, so even if the compiler allows the above example, the mistake will not go unnoticed for long. Here is some pseudocode for a test that will do the job:
  • Use classpath scanning to iterate over all the entity classes in the domain.
    • Use reflection to iterate over all the fields tagged with annotations such as @OneToMany.
      • If the type of the field is a sub-type of RootEntity, then fail.
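The pseudocode above might be sketched like so. As before, this is a self-contained illustration: the entity classes are hard-coded in place of real classpath scanning, and the annotations are stand-ins for the JPA ones:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class AggregateBoundaryTest {

    // stand-ins for the JPA relationship annotations
    @Retention(RetentionPolicy.RUNTIME) @interface ManyToOne {}
    @Retention(RetentionPolicy.RUNTIME) @interface OneToMany {}

    interface RootEntity {}

    static class Customer implements RootEntity {}

    static class Order implements RootEntity {
        @ManyToOne Customer customer;  // illegal: targets another aggregate root
    }

    // iterate the entity classes, then their relationship-annotated
    // fields, and flag any field whose type is a RootEntity sub-type
    static List<Field> rootTargetingFields(Class<?>[] entities) {
        List<Field> violations = new ArrayList<>();
        for (Class<?> entity : entities) {
            for (Field f : entity.getDeclaredFields()) {
                boolean relationship = f.isAnnotationPresent(ManyToOne.class)
                        || f.isAnnotationPresent(OneToMany.class);
                if (relationship && RootEntity.class.isAssignableFrom(f.getType())) {
                    violations.add(f);
                }
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Class<?>[] entities = { Customer.class, Order.class };
        System.out.println(rootTargetingFields(entities));
    }
}
```

A real test would also inspect the element type of collection-valued mappings such as @OneToMany, since there the field type itself is a Collection rather than an entity.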
There's still one question that we haven't answered yet: How do we prevent an entity from forming a compositional relationship with an entity from another aggregate? At the moment there is nothing preventing us from doing something like this:

public class Customer implements RootEntity {

    @ManyToOne(fetch = FetchType.LAZY, cascade = {})
    private OrderItem orderItem;

    // ...
}

To prevent things like this, we can make use of F-bounded polymorphism, which is a common enough technique in Scala, but might seem a little strange to a Java programmer. The first thing we do is to modify our Entity interface to specify the root of the entity aggregate as a type parameter:

public interface Entity<R extends RootEntity<R>> {
}

This causes ripples throughout the code developed so far, starting with RootEntity. This is where the F-bounded polymorphism comes in:

public interface RootEntity<R extends RootEntity<R>>
extends Entity<R> {
}

The outcome of this use of type parameters is to force a RootEntity sub-type to specify itself as R. In essence, it prevents us from saying that an Order belongs to the aggregate that has a Customer as root. Trying to do this will give a compiler error about a type bounds mismatch:

// does not compile!
public class Order implements RootEntity<Customer> {
}

We need to modify all our entity classes to specify their root:

public class Customer implements RootEntity<Customer> {
}

public class Order implements RootEntity<Order> {
}

public class OrderItem implements Entity<Order> {
}

We also need to adjust our type signatures for associations and repositories:

interface Association<R extends RootEntity<R>> {
    Long getId();
}

public abstract class StorefrontRepository<R extends RootEntity<R>>
implements CrudRepository<R, Long> {
}

Now we can write a unit test to prevent the example above, where Customer established a @ManyToOne with an OrderItem. We simply scan our entity classes for fields annotated with JPA @ManyToOne, @OneToOne, etc., and then use reflection to assure that the root of the entity class is the same as the root of the referenced entity.
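This root-comparison check can be sketched as follows. The root of each entity is recovered from the type argument it supplies to Entity or RootEntity, via the generic-interface reflection API. As in the earlier sketches, the class list is hard-coded in place of classpath scanning, the annotation is a stand-in for the JPA one, and the helper names are my own:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;

public class AggregateRootTest {

    // stand-in for the JPA relationship annotation
    @Retention(RetentionPolicy.RUNTIME) @interface ManyToOne {}

    interface Entity<R extends RootEntity<R>> {}
    interface RootEntity<R extends RootEntity<R>> extends Entity<R> {}

    static class Customer implements RootEntity<Customer> {}
    static class Order implements RootEntity<Order> {}

    static class OrderItem implements Entity<Order> {
        @ManyToOne Customer customer;  // illegal: Customer's root is not Order
    }

    // recover the aggregate root R from "implements Entity<R>" or
    // "implements RootEntity<R>"
    static Class<?> aggregateRootOf(Class<?> entityClass) {
        for (Type t : entityClass.getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                ParameterizedType p = (ParameterizedType) t;
                if (Entity.class.isAssignableFrom((Class<?>) p.getRawType())) {
                    return (Class<?>) p.getActualTypeArguments()[0];
                }
            }
        }
        return null;
    }

    // flag any relationship field whose target has a different root
    static List<Field> crossAggregateFields(Class<?>[] entities) {
        List<Field> violations = new ArrayList<>();
        for (Class<?> e : entities) {
            for (Field f : e.getDeclaredFields()) {
                if (f.isAnnotationPresent(ManyToOne.class)
                        && Entity.class.isAssignableFrom(f.getType())
                        && aggregateRootOf(f.getType()) != aggregateRootOf(e)) {
                    violations.add(f);
                }
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Class<?>[] entities = { Customer.class, Order.class, OrderItem.class };
        System.out.println(crossAggregateFields(entities));
    }
}
```

The OrderItem.customer field is flagged, because OrderItem's root is Order while Customer's root is Customer.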

As we've seen today, reflective, classpath-scanning unit tests can be a great way to enforce project constraints that are difficult or impossible to enforce at the compiler level. We've also developed a great little framework for enforcing the basic constraints of DDD aggregation in a Java EE application using JPA. This is quite a bit of local infrastructure to develop for the sake of assuring that our entity aggregates are well formed. You could make use of some or all of it to assist your team in designing aggregates well. How far you choose to go will depend on the needs of your team.

If, however, you are in the planning stages of a new project, I would urge you to consider alternatives to JPA and relational database technologies that might be better suited to doing Domain Driven Design. In the next essay, we will investigate how much easier implementing aggregates might be if we switched to MongoDB, or some other document database.


Advancing Enterprise DDD - Problems with Proxies

In the previous two essays of the Advancing Enterprise DDD series, (Cascades and Fetch Strategies, and Overeager Fetch), we've seen how JPA makes use of proxy objects to represent entities that have not yet been loaded from the database. In this essay, we will take a look at some of the problems raised by the use of proxy domain objects in JPA, and discuss what we can do to avoid them. We'll look at LazyInitializationExceptions, and problems arising from hidden network I/O in general. And we'll look at accidental access to the zeroes and nulls in a proxy object, and unexpected behavior when type-checking our domain object.


Advancing Enterprise DDD - Overeager Fetch

In the previous essay in the Advancing Enterprise DDD series, we investigated how to tailor our cascades and fetch strategies to align with aggregate boundaries. In this essay, we continue to look at fetch strategies in situations where the shape of our aggregates is more complex.


Advancing Enterprise DDD - Cascades and Fetch Strategies

In the last essay of the Advancing Enterprise DDD series, we learned about entity aggregates, and how to construct them while doing domain modeling. In this essay, we begin to look at some specifics of how to implement aggregates using JPA. Specifically, we want to configure our cascades and fetch strategies so that persistence operations align with aggregate boundaries.


Advancing Enterprise DDD - The Entity and the Aggregate Root

In this essay of the Advancing Enterprise DDD series, we will leave behind the POJO for a bit, and look at entity aggregates. After the entity, the aggregate is probably the most important building block in Domain Driven Design. Aggregates provide a way to organize your entities into small functional groups, bringing structure to what might otherwise be a knotted jumble of entity classes.


Scala EE Discussion Group

I feel like Scala is poised to become the successor to Java on the JVM. But migrating from large-team Java and Java EE presents many challenges. For instance, what to do to assure a comfortable level of stylistic uniformity throughout the codebase? What if your senior engineers want to use, say, scalaz, and you're worried that the mid- and junior-level developers on your team won't be able to understand it? Just which technologies should you choose, and how do you know they are robust and stable? I've created a Google+ group for Java EE expats (and their loved ones) to discuss issues like this. Please stop by and share your thoughts.



Advancing Enterprise DDD - Rethinking the POJO

In the previous essay of the Advancing Enterprise DDD series, we looked at a sample POJO with a variety of un-encapsulated persistence concerns. Unfortunately, this exposure occurs right at the heart of our software system: the domain model. Because the domain classes are used throughout the service layer, and are sometimes mirrored through the application layer up to the user interface, our whole codebase is susceptible to misuse of these data. Furthermore, having these data in the domain classes themselves inevitably clouds our thinking when trying to work in terms of the domain and the ubiquitous language. 


Advancing Enterprise DDD - The POJO Myth

This is the third essay in my Advancing Enterprise DDD series, where we discuss doing Domain Driven Design with the standard enterprise Java toolset: Java, Spring, Java Persistence API (JPA), and a relational database (RDB).

In the last essay, we took a higher level view of the design principles of JPA. In this essay, we examine how the JPA-annotated plain-old Java object (POJO) fails to separate persistence-level concerns from our domain classes.


Advancing Enterprise DDD - What Makes It So Hard?

This is the second essay in my Advancing Enterprise DDD series, where we discuss doing Domain Driven Design with the standard enterprise Java toolset: Java, Spring, Java Persistence API (JPA), and a relational database (RDB).

In the previous essay, we reflected on the way the tools we use can affect the way we reason about our domain. In this essay, we look at the high-level design principles of JPA, and how well they align with the principles of DDD.


Advancing Enterprise DDD - Working with the Tools We Have

We’ve done our best to do Domain Driven Design with the standard enterprise Java toolset: Java, Spring, Java Persistence API (JPA), and a relational database (RDB). While this has been a very successful stack for us in developing enterprise applications, we’ve encountered problems doing DDD well in this environment. In this series of essays, we will look at some of the difficulties we’ve had, and consider ways to improve our situation. We provide advice for doing DDD with the given toolset. And we look ahead to how we might do DDD better with different tooling - specifically, replacing Java with Scala, and RDB with a document database such as MongoDB.


Introduce Type Param Pattern

I've always had the feeling that many of the common design patterns were simply recipes for doing something that your programming language does not do for you. The standard Gang of Four design patterns apply to many different programming languages, because these are things that most object oriented programming languages, at least at the time, weren't prepared to help you with. Just which design patterns are relevant is something that changes over time, as languages evolve to do more for you. A classic example is the singleton pattern and Scala. Scala provides singleton objects as a language feature, and most of the implementation concerns addressed by the singleton pattern are now handled by the language.

On the other hand, as languages get more powerful, we expect more from them, and are naturally inclined to attempt more sophisticated things than we used to. I often get myself in trouble with Scala, because I try to do things I would never attempt in Java. In a position like this, we may need to consider new patterns to get around the newly discovered limitations of the new language. I "invented" a little Scala-specific pattern, which I will show here, that I've used to overcome a small problem I come across from time to time.