Examples of Best Practices

   Dave Edwards

In pursuit of an easy-to-use, easy-to-maintain system landscape...

 

No matter the size of your company or number of end users, having a standard set of best practices will always prove valuable. I have implemented over 15 models in the past four years across multiple companies, and along the way I discovered the importance of consistency and strategically designed model architecture.

I will share with you three recommendations for improving your model design. These are quick changes, likely only one or two weeks to implement, that will speed up your models and make managing your architecture simpler.

My quick best practices:

  • Rename your imports to clearly show how data flows from one list/module to another.
  • Choose speed over size to keep your users and model builders happy.
  • Other Anaplan experts won’t go this far, but be literal with DISCO to simplify drilling into a calculation flow.

Smart Import Naming

This will take you the better part of a day to implement. When you create an import action in Anaplan, the system applies a default label that is neither intuitive nor concise. Anaplan’s support staff will tell you to solve this by naming an import after what the action does. So if you are importing from a Data Hub cost center module into a spoke model module, you would name that action something like “1. Import Cost Center Properties.”

I do not entirely agree with that approach. When an action’s name describes what is happening rather than where the data is coming from and going to, you lose visibility into the action’s mapping. That visibility matters when you need to track down a bug or trace source data for an end user questioning the validity of a number on their screen.

For this example, use this naming convention instead:

mod_S_CC Hierarchy // DH_svd_S_CC Hierarchy.Import

What does this naming convention tell you?

Let’s break it down.

First, here is how you should read this action:

“The S_CC Hierarchy module is being updated by a saved view from the S_CC Hierarchy module in the Data Hub.”

Take a look at the breakdown below to see how I can gather that information just from my smart labeling method:

Import Target

  • mod_ ⇨ the target is a module
  • S_CC Hierarchy ⇨ the name of the target module

Import Source

  • DH_ ⇨ the source lives in the Data Hub model
  • svd_ ⇨ the data comes from a saved view
  • S_CC Hierarchy ⇨ the name of the source module
  • .Import ⇨ the name of the saved view used by this action

Now I can add this import to a process called “Import Cost Center Properties” along with any other related actions, which will provide me an obvious location for updating my cost center hierarchy.

This approach also keeps import mappings visible, so you can quickly trace an action back to both its source and its target.
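
The same convention extends to list imports as well. As a hypothetical example (the list name here is my own illustration), an action that updates a Cost Center list from that same Data Hub saved view could be named:

list_Cost Center // DH_svd_S_CC Hierarchy.Import

which you would read as: “The Cost Center list is being updated by a saved view from the S_CC Hierarchy module in the Data Hub.”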

Model Speed vs. Model Size

While some of us are fortunate enough to work with large IT budgets and, therefore, large Anaplan workspaces, most of us will routinely need to choose between a model that fits within our GB limitations and one that provides minimal calculation times for end users. The constant risk as a model builder is designing a solution that appears efficient based solely on its GB size and not calculation speed.

As a point of reference for any model builders reading this, my most recent build was a holistic financial planning model that came in at just over 100GB. The slowest calculation time across any of the user input locations? 1.1 seconds.

This model included CapEx projections, workforce data aggregation from a separate model, zero-base budgeting, and final expense allocations for an organization with over 10,000 cost centers. End users never had to wait longer than 1.1 seconds for an entry to calculate throughout the model.

However, this model could have easily been 80-90GB by combining certain line items into long strings of conditional “IF” statements had our design goal been to save on workspace size instead of promoting calculation speed. The only problem with this scenario is that the condensed version of the model resulted in calculation times of over 15 seconds for the same inputs.

It is easy to assume that a smaller model must mean a faster model, but Anaplan works best when calculations are broken out over multiple line items (and even modules in some cases).
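
For illustration, here is a simplified sketch with hypothetical line item names. A condensed design repeats the same sub-calculation inside a conditional:

Total Cost = IF Is Allocated? THEN Base Cost * FX Rate * Allocation % ELSE Base Cost * FX Rate

Breaking the repeated piece into its own line item means it is calculated once, and each staged line item can be evaluated independently:

Cost in USD = Base Cost * FX Rate
Total Cost = IF Is Allocated? THEN Cost in USD * Allocation % ELSE Cost in USD

The second version takes up more space (one extra line item), but it hands the engine smaller, reusable calculations, which is exactly the speed-for-size trade described above.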

A properly designed 100GB model will perform faster than a poorly designed 100MB model!

This strategy promotes segmented calculation chains that thrive in Anaplan’s Hyperblock engine, and it also cuts down on user complaints about model lag!


A Literal Approach to DISCO

For anyone who is unfamiliar with DISCO, this is an acronym for Anaplan’s prescribed best practice for module architecture:

  • D ⇨ Data
  • I ⇨ Input
  • S ⇨ System
  • C ⇨ Calculation
  • O ⇨ Output

Each module has a purpose, whether it is data storage, user input, dimension properties or mappings, background calculations, or visible outputs on dashboards. You can read more about DISCO on Anaplan’s Community page.

What sets me apart from Anaplan’s support staff is my perspective on labeling modules within a DISCO framework. Anaplan suggests that modules be named based on the purpose they achieve, and DISCO is merely a method for segregating module functions.

When I design a model, I go one step further and label each module with a “D”, “I”, “S”, “C”, or “O” so I can quickly trace back my calculations or imports when working on a user request or within a dashboard. Imagine a model that is so simply organized that you can quickly view a data flow and track potential bugs in calculation design. You can even use smart headers to break out functional areas within the DISCO groupings!
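
To make this concrete, here is a sketch of how a model’s contents might read under this approach, using the S_CC Hierarchy module from earlier alongside hypothetical module names of my own invention:

D_GL Actuals (data storage)
I_Expense Adjustments (user input)
S_CC Hierarchy (dimension properties and mappings)
C_Expense Allocation (background calculations)
O_Expense Summary (dashboard output)

One glance at a module name tells you its role in the calculation flow before you ever open it.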

So try these suggestions out! Like I already said, these can be very quick to implement and make navigating your models much easier. Your team will thank you, and your end users will really thank you.

Thoughts or questions? Send me a note! I’d love to hear how these best practices are working in your models, or if you want my input on a project design.

Thanks for reading!