Introducing Ark Analytic Solutions

As many of you know, I went to work for Tableau in 2018, and loved working there. I learned much about the great products they produce, and developed a significant amount of expertise around Tableau Server. But ultimately, it was just not the right fit for me. So last year, I decided to form my own consulting company. My first client kept me for six months, during which time I established Ark Analytic Solutions, LLC, a BI consultancy based around exceptional service and expertise.

When the lockdowns started, the client I was working with had to release all outside help, as their business had been severely impacted by the pandemic and subsequent lockdowns. So I’ve been sheltering in place at home while honing my skills, and developing my business plan.

The good news is that we are now open for business. We are, for now, offering remote assistance in BI, data and BI governance, big data analytics, and data movement and management. I have more than twenty years' experience in BI, and I have developed relationships with some of the best people in the business. I'm bringing all of that experience together to benefit our clients. We stand ready to help for a few days, a few years, or anything in between.

Our logo is currently in development, and I’m looking for assistance in building a web site.

This adventure has been a long time coming, and I’m really excited to make this announcement. Feel free to contact me if you have any questions, or need any assistance with any project you may have. I promise to offer you the best service possible. In the meantime, stay safe out there.

Sum Prior Months into Current Month (Webi)

Yes, this was an actual request, and when it came in, I honestly wasn’t sure if it was possible. Open orders are allocated to the month in which they are scheduled to ship. But some orders, scheduled to ship in past months, for whatever reason, didn’t ship when scheduled. They are still open. So the client wanted the open order amounts for prior months to be summed into the current month, while the amounts for future months will stay as scheduled. Well, I hate to say no to any request, so I went to work on it, and, with the use of a flag variable, and calculation contexts, I was able to make it work.

Let’s imagine that the raw results from our query look like the following, and that the current month is January 2020.

As you can see, there is quite a bit of data for the prior months. But I need all open_amounts from previous months to be brought forward into the current month. To begin with, I needed a flag variable that would flag the months as needed. Flag variables are very handy for many purposes, and most of the time they return one of two values. In this case, however, I needed a three-way flag variable:

  • If the month is past, flag it with a 1.
  • If the month is the current month, flag it with a 0.
  • If the month is a future month, no flag is needed.

So, to start with, I created a dimension variable called Completed Months Flag. Here is the formula:

=If([invoice_month] < MonthNumberOfYear(CurrentDate()) And [invoice_year] = Year(CurrentDate());1;
If([invoice_year] < Year(CurrentDate());1;
If([invoice_month] = MonthNumberOfYear(CurrentDate()) And [invoice_year] = Year(CurrentDate());0)))

Let’s break this down. The first line will return a 1 for all previous months in the current year. Since the current month is January, this part won’t return a 1 for any row.

The second line will return a 1 for all previous months in prior years. In this case, it will return a 1 for the months of July – December, 2019.

The third line will return a 0 only for the current month, January, 2020.
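The same three-way logic is easy to see outside Webi. Here is a minimal Python sketch (the function and field names are my own, purely illustrative) of what the Completed Months Flag computes for each row:

```python
from datetime import date

def completed_months_flag(invoice_month, invoice_year, today):
    """Three-way flag: 1 = past month, 0 = current month, None = future month."""
    if invoice_year < today.year:
        return 1                      # any month in a prior year is in the past
    if invoice_year == today.year and invoice_month < today.month:
        return 1                      # an earlier month in the current year
    if invoice_year == today.year and invoice_month == today.month:
        return 0                      # the current month
    return None                       # a future month: no flag needed
```

So with January 15, 2020 as the current date, December 2019 is flagged 1, January 2020 is flagged 0, and March 2020 gets no flag.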

When I add the flag variable to the table, I see the following results.

So now, I know which months need to be summed up (those months marked by a 1), and where they need to be summed up (the month with a 0). So now, I need to take advantage of calculation contexts, and create the measure variable, called Open Forward, that will sum the prior months into the current month. Here is the formula.

=Sum(
    If([Completed Months Flag]<>1;
        Sum([open_amount] In Block Where([Completed Months Flag]=1))
    )
) Where([Completed Months Flag] = 0)
+ Sum([open_amount]) Where([Completed Months Flag] <> 1)

If you think this looks complicated, you’re right. So let’s walk slowly through this formula, part by part.

We start with an If function. If the flag is not equal to 1 (that is, for the current and future months), then sum the open_amount for the entire block, but only where the flag does equal 1 (all previous months). And only do this where the flag is equal to 0 (the current month). Let me restate that: wherever the flag variable is 1, sum up all of those values in the block, and place that sum on the row where the flag is 0. So all previous months’ values are summed into the current month.

If we stopped there, we would get the prior months summed into the current month, but all future months would be left blank, and the open_amount for the current month would also be blank. That leads us to the final part, which simply adds the sum of the open_amount where the flag does not equal 1. So each month, from the current month going forward, simply displays the open_amount for that month. In the case of the current month, that amount is added to the sum of the previous months. With a little formatting, we end up with a table that looks like this:
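The roll-forward described above can also be expressed directly in plain Python. This is only a sketch with made-up sample amounts, but it mirrors the formula: sum every row flagged 1 into the row flagged 0, leave future (unflagged) rows as scheduled, and blank out the past rows:

```python
# Each row: (flag, open_amount); flag 1 = past month, 0 = current, None = future
rows = [(1, 100.0), (1, 250.0), (0, 75.0), (None, 40.0), (None, 60.0)]

# Total of all prior-month amounts (the rows flagged 1)
past_total = sum(amt for flag, amt in rows if flag == 1)

open_forward = [
    # current month: its own amount plus all prior months
    past_total + amt if flag == 0
    # future months keep their scheduled amount; past months show nothing
    else (amt if flag is None else None)
    for flag, amt in rows
]
# open_forward -> [None, None, 425.0, 40.0, 60.0]
```

The 100 and 250 from the past months disappear from their own rows and show up inside the current month's 425, while the future months are untouched, just as in the Webi result.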

Of course, you can remove the flag variable, and the open amount columns from the table, and you have the final results. Needless to say, I was pretty excited to actually make this work, and the client was quite pleasantly surprised. I also made a version of this in Tableau, and will have a post on that soon.

Tableau Security Made Easy

When I joined Tableau last year, I found that the security model presented in training classes ran counter to what I consider a best practice, based on nearly twenty years of working in the Business Intelligence (BI) space. The model presented is what I would call a legacy security model. It requires the creation of three groups for each Project on Tableau Server.

As you can see, this organization has five groups, each with its own Project in Tableau Server. For each Project, three groups are created: Viewers, Explorers, and Creators. Users of any given Project are put into the appropriate group for that Project. The problem is that we end up with too many groups to maintain. Imagine an organization that has fifty groups, each with their own Project. They would end up with 150 groups to maintain in Tableau. This is simply too many groups, and creates a maintenance nightmare. The same results can be accomplished with a fraction of the groups.

My experience is that it’s much easier to maintain a modern security model:

The idea here is that any user who is an Explorer or Creator would simply be added to the Explorers or Creators group, but not both. In the example above, Mary is a member of the Sales group, which grants her the rights to see the Sales Project. She is also a member of the Creators group, which grants her the right to create and edit workbooks. This dual membership accomplishes the same thing as the single group membership in the legacy security example, but results in far fewer groups to maintain.

My first thought was that, perhaps Tableau isn’t capable of this type of security model. So I built a Tableau server and tried it out. I was delighted to find that I could make it work, and it was pretty simple. And, as I started sharing this with people inside Tableau, it generated quite a bit of enthusiasm. So here’s how it is accomplished:

Step 1: Create your groups. You can create them in Tableau, or in your identity store (e.g. Active Directory). Create one group for each Project, as well as a group for Explorers, and a group for Creators.

Step 2: Move your users into the appropriate groups. In the example above, Dave has a Viewer license, so he only needs to be a member of the Operations group. Mary, on the other hand, is a Creator, so she is added to the Sales group, as well as the Creators group.

Step 3: Set permission for groups on each project. The Marketing group will be added to the Marketing Project, and given only Viewer rights. The Creators group will also be added to the Marketing Project, and will be given all rights except View and Project Leader (Project Leaders get View rights). In the same way, the Explorers group would be added to the Marketing Project, and given all Explorer rights, except View and Project Leader.

Although the Creators group has lots of permissions on the Marketing Project, it doesn’t have view rights. So members of the Creators group cannot see the Marketing Project, unless they are also a member of the Marketing group. And since rights are cumulative, a member of both Marketing and Creators will receive all the rights they need to create content within the Marketing Project.
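Because Tableau permissions are cumulative across groups, a user's effective rights are just the union of the rights granted to every group they belong to. A rough Python sketch of the idea (the group names and right names here are illustrative, not Tableau's actual capability names):

```python
# Illustrative mapping of groups to rights; not Tableau's real capability names
group_rights = {
    "Marketing": {"View"},                      # project group grants visibility only
    "Creators":  {"Edit", "Save", "Download"},  # functional group, deliberately no View
}

def effective_rights(user_groups):
    """Union of rights across all of a user's groups (rights are cumulative)."""
    rights = set()
    for group in user_groups:
        rights |= group_rights.get(group, set())
    return rights

# A Creator who is not in Marketing cannot see the Marketing Project...
assert "View" not in effective_rights(["Creators"])
# ...but a member of both groups gets everything needed to create content there.
assert effective_rights(["Marketing", "Creators"]) == {"View", "Edit", "Save", "Download"}
```

This union behavior is exactly why the two-dimensional model (one group per Project, one group per license level) works with so few groups.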

As you can see, this is a very simple and efficient way to manage your groups within Tableau.

How do you model your security in your Tableau environment? Have you used a method like this? Let me know in the comments what your thoughts are on this approach.

Hello, Tableau

Three days ago, I started a new job as a Solution Architect for Tableau Software. I’ve been working with Tableau for the past three years, but never expected that I would go to work for Tableau. Needless to say, I’m very excited about this new direction in my career. I’m also excited to be deepening my knowledge of Tableau. I’ve loved working with the product. Nearly all of my experience is with Tableau Desktop, but in this new job, I’ll be assisting Tableau customers with building an enterprise solution that meets their needs.

With this in mind, I can also say that, as I learn more and more about Tableau, I will start writing articles here about Tableau. I still love BusinessObjects, and will continue to keep up with new developments in BusinessObjects. But I don’t expect to be writing much, if anything, about BusinessObjects in the future.

I don’t see Tableau and BusinessObjects as competitors. I think they complement each other nicely. BusinessObjects is an outstanding, and very mature, reporting tool, with some data visualization capabilities. Tableau is an outstanding data visualization tool, with some reporting capabilities. So they work well together.

In fact, I recently worked on a project evaluating tools that allow Tableau to connect to BusinessObjects. Some tools connect to universes, others connect to Webi documents. And some tools do both. It was an interesting project, and may find its way into a future blog post.

Thank you to those who have followed this blog. I’m excited to keep it going, with a new direction.

New Beginnings

Some of you may already be aware of this. Not long ago, I was caught up in a lay-off at Teradata. This was a business decision on their part, and I hold no grudges against them. Teradata is a good company, with a great product line. I was working for the Think Big Analytics division, who decided that BusinessObjects is not a technology with which they wish to work. After all, BusinessObjects is a product of SAP, and SAP is a competitor of Teradata. So, that makes sense. I get that.

For the past three years or so, I’ve been working more with Tableau than with BusinessObjects. That’s why I haven’t written any new blog posts in a while. I considered myself a beginner for a long time, but I’ve been honored to learn this new and growing technology. In fact, I’ve fallen in love with Tableau. It’s a great product. It’s not perfect, but then, what tool is? I’ve developed enough skill in Tableau that I have begun to work independently with it. I’ve taken a few consulting and training jobs, and these have gone very well.

I see BusinessObjects as a declining technology, and Tableau as a growing technology. Do you use Tableau in your company? Do you need consulting help, or perhaps some training? I would be happy to help. If you have need of my service in Tableau, or, of course, BusinessObjects, feel free to contact me.

If you use Tableau, tell me what you think of it. What do you like about it? What areas need improvement? I have my own thoughts about this, and will probably share some of them in another post. But I would love to hear from you.

Universe Design Best Practices

I’ve always avoided writing a blog on universe best practices, primarily because there have been many others who have written some excellent blogs on the topic. However, I have had quite a few requests to write about it, and have finally agreed to compile my own list of Best Practices for universe design. After all, the universe is one of the best advantages SAP has over other BI tools. So, a well thought out, well built universe is key to a successful deployment.

SAP BusinessObjects includes two tools for building and maintaining universes: Universe Design Tool (the legacy tool formerly known as Designer) and Information Design Tool (the newer tool, first released with version 4.0). These Best Practices will, for the most part, apply to both tools. Where they don’t, I’ll be sure to point that out.

I categorize all Best Practices as follows:

  • User Friendliness: These are practices that will make a universe easier for end users to use. After all, what we do is for their benefit. Anything we can do to make the universe easier for the users, we should do. Keep in mind that happy users will say nice things about you to your boss. 😉
  • Designer Friendliness: At some point, another developer will take over the universe(s) that you have built. Let’s build our universes in such a way that they won’t have to reverse engineer everything we did, just to figure it out.
  • Database Friendliness: We want our universes to return the correct results, as fast as possible. So, it’s best to design them so that they generate the most efficient SQL for the database.

User Friendliness

A universe is only as good as the users think it is. No matter how brilliantly I design a universe, if the users don’t like it, it’s not a good universe. In fact, good universe design is critical to the success of the entire BusinessObjects deployment. If users don’t like the universes, they will abandon the system, and get the data they need some other way. But the more they like the universes, the more they will use them, and the more they will promote them among their co-workers.

Who is the best person to build a universe? Perhaps a Database Administrator (DBA)? Actually, while a good DBA is critical to the process, DBAs don’t speak the same language as the users. They can build a universe that is technically excellent, but the users won’t be able to figure it out. I usually prefer to have a Business Analyst who knows SQL create the universe; it tends to be more user friendly this way. But in order to do a good job, you must involve other people. So, with that in mind, here is a list of Best Practices for making your universes user friendly.

  • Before you start building the universe, gather a team of power users, perhaps 3 – 5 people, who will assist you in making the universe user friendly. Throughout the process, they can review your design and make sure that folders, objects, descriptions, etc., make sense to them. As an added benefit, these power users will be your marketing team when the universe goes into production.
  • Develop and use a naming convention for everything that the users see. Universe names should reflect the name of the system from which they draw data. Folder and object names should be user friendly. Never use underscores in anything the user sees. And never merge words together, such as PhoneNumber. Put a space between the words. And don’t use abbreviations except where it makes sense to users. For example, use Phone Number, rather than Phone No.
  • Objects should be organized in folders in such a way that it makes sense to the users. Design your folder structure ahead of time, and have your Power User team verify it for you. Do not create a folder for each table. Users don’t care about tables.
  • Within a folder, objects should be arranged alphabetically or hierarchically, depending on the needs of the users. You want it to be as easy as possible for the users to find the objects they need.
  • Never have more than three levels of folders. If users have to open more then three folders to get to the objects, they will get very frustrated. Use separator objects, rather than folders, to organize objects, especially when there are only a few objects. What is a separator object? It’s an object that doesn’t serve any purpose other than to provide visual separation between groups of objects.
    If you create separator objects, make sure that they cannot be used in Results, Conditions, or Sorts.
  • Never create a folder for one object. Find another place for it.
  • If the same object appears in multiple universes, use the same name for that object in all universes. For example, don’t call it Client in one universe, and Customer in another. If it’s the same thing, call it the same thing.
  • Never use the same name for multiple objects within the same universe. If, for example, you have Year in multiple folders, which is probably a bad idea to begin with, give them different names, like Fiscal Year and Calendar Year.
  • Make sure that the universe has a user friendly name and description. Check with your Power User team, to make sure that these make sense to them. And remember, no underscores.
  • For date objects, never, ever create Automatic Time Hierarchy objects. Take the extra five minutes to create them the correct way. If you don’t know what Automatic Time Hierarchy objects are, don’t worry. You don’t need to know. These objects were a bad idea to begin with. In fact, IDT doesn’t even have this feature.
  • Only create objects that are needed for reporting. Don’t create an object just because there’s a column in a table. If it isn’t required, don’t create it. This will keep the universe smaller, and easier for the users to use. You can always add more objects later, as the need arises.
  • Make sure that your contexts have user friendly names and descriptions. And make sure that users aren’t prompted to select a context, unless you want them to have the option to select a context. You can add tables to objects to force them through a specific context. I find this very useful.
  • Lists of Values (LOVs) can be very helpful, or very painful, for the users. Make them as user friendly as possible. Build them from dimension tables, not fact tables. If it’s a large list, use Database Delegated, so the users can easily search for what they need without having to scroll through a long list. Use a static LOV for lists that never, or rarely, change.
  • Never use default hierarchies/navigation paths. They rarely make sense. Create at least one custom hierarchy. If you don’t, the default hierarchies will be made available.
  • Every object, without exception, should have a description. Make sure that the descriptions are approved by your Power User team. For measure objects, include the calculation in the description. Be thorough, but not verbose. Believe it or not, verbose object descriptions can affect query performance. Don’t ask why. I don’t know. But I’ve seen it with my own eyes, and confirmed it with SAP.
  • Format all numeric and date objects in the universe. Of course, users can change the format in the report, if they wish. But if you do it in the universe, users won’t have to reformat the data every time they build a new report.

Designer Friendliness

As a universe moves from one developer to another, we want to make sure that there is sufficient documentation within the universe, so that it’s as easy as possible for a developer to understand the universe. To that end, here is a list of things you can do to make your universes more Designer Friendly.

  • Keep the Structure Pane (UDT) or Data Foundation (IDT) well organized and clean. I like to place fact tables in the middle, and dimension tables to the left or right; I find it’s easier to identify fact tables if I do that consistently. Don’t let tables overlap each other, and try to keep tables that are joined together close to each other. It makes the relationships easier to see.
  • If you create alias or derived tables, put comments next to them to explain their purpose.
  • Use a standard naming convention for Alias and Derived tables. I like to prefix alias tables with A_ and derived tables with DT_. This makes them easier to identify. And never use spaces in Alias and Derived table names, or columns. It can make SQL code tough to read.
  • When you create derived tables, write the SQL in such a way that it’s easy to read and understand.
  • In UDT, LOVs should be named the same as the object to which they belong. This makes troubleshooting much easier.
  • There is a comments box in the universe that only developers can see. Use it. It’s a great place to keep notes about the universe that other developers can use. You can use it to record who built the universe and why things were done the way they were. Be verbose; you can put a lot of information in there.
  • Create a folder in your universe called Universe Release Notes. In that folder, create an object for each release of the universe. The name of the object will be the release date. The description of the object will be the notes for that release. Make sure the entire folder is hidden, so users can’t see it.
  • Always set cardinalities on your joins. It makes it easier to see how tables relate to each other. It also makes the detection routines work more accurately.
  • Only use Derived Tables when necessary. In fact, use them as a last resort. Oftentimes, there’s a better way to do it.

Database Friendliness

One of the most important things we can do for our users is to make sure that we are creating the universe in such a way that it generates the most efficient SQL for the database. The users will appreciate faster running queries, and so will the DBAs. So, keep these best practices in mind when building universes.

  • Don’t use Aliases to avoid creating contexts. Aliases and contexts have different purposes. Contexts, if properly built, won’t help your queries run faster, but will cause less confusion with the users.
  • Every measure object must have an aggregate function in the Select. No exceptions. If it doesn’t get aggregated, don’t make it a measure.
  • Use Index Awareness wherever applicable. This will allow the queries to search indexed columns, and can sometimes reduce the number of joins and tables in the SQL.
  • Set the ANSI92 parameter to Yes. This will generate more efficient SQL for modern databases. It will also allow the use of full outer joins in the universe.
  • If your database supports it, set the FILTER_IN_FROM parameter to Yes. This will move filters from the Where clause to the From clause and may eliminate some of the processing time at the database.
  • You may also get improved performance by setting the JOIN_BY_SQL parameter to Yes. However, this needs to be tested with some of your largest queries, as it can consume large amounts of database resources in very large queries.
  • In the connection, set the Array Fetch Size to at least 250. But test it at higher levels as well. Depending on the database, and network bandwidth, you could go as high as 1000 and get better performance.
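The effect of Array Fetch Size is easy to reason about: it sets how many rows come back per network round trip, so the number of round trips is roughly rows divided by fetch size. A quick back-of-the-envelope calculation (the row counts are illustrative):

```python
import math

def round_trips(total_rows, fetch_size):
    """Approximate number of network round trips needed to fetch a result set."""
    return math.ceil(total_rows / fetch_size)

# A one-million-row result set at different Array Fetch Size settings:
print(round_trips(1_000_000, 10))    # 100000 round trips
print(round_trips(1_000_000, 250))   # 4000 round trips
print(round_trips(1_000_000, 1000))  # 1000 round trips
```

This is why raising the setting from the default helps large queries so much, and why the gains taper off: going from 250 to 1000 only saves a quarter as many round trips as going from 10 to 250. Actual results still depend on the database and network, so test before standardizing.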
  • Enable Query Stripping in IDT and Web Intelligence. This will remove unused objects from the SQL as it is sent to the database.
  • In the SQL for objects, keep the transformations to a minimum. For example, only use CASE statements when they are really needed. The more you do, the slower the queries will run.
  • Avoid clusters of tables that aren’t joined together. If objects shouldn’t be used together in a query, they should be in separate universes. Otherwise, you risk Cartesian Products.


When it comes to user friendliness, the bottom line is this: if there’s anything I can do in a universe to make it easier for the users, I should do it. If it takes me an hour to do something that will save five minutes for a user, then I will do it. If it saves five minutes for one user, it will save five minutes for many users. It will be worth the effort to have happy users.

As for designer friendliness, anything I can do to help the next developer understand the universe, I should do. It will also help me when I come back to this universe. It’s good to keep notes in a universe that will refresh my memory as to why things are done the way they are.

And, of course, a universe that performs well will gain appreciation from the users and DBAs alike.

I hope you have found this post to be useful. Feel free to share additional tips in the comments. I love learning from my readers.

Review of Information Design Tool 4.1

It’s been a while since I wrote my initial review on Information Design Tool (IDT) in SAP BusinessObjects 4.0. If you’ve read that post, you know that I was frustrated with missing features in IDT, as compared to Universe Design Tool. Every time I found a missing feature, I opened an Idea on the SAP Idea Place, requesting that the missing feature be added. Well, as it turns out, I was the top contributor for new ideas for IDT 4.1. And nearly all of my requests were delivered in version 4.1. So, I’ve been using IDT 4.1 for quite some time now, and am ready to give my impressions.

The Good News

IDT has some really nice features that I find very helpful when doing universe development.

  • Families: Tables can be grouped into families, and each family can have its own color. I will typically have a color for each fact, and a color for conformed dimensions. Aggregate tables will also have a color. I find it helps to be able to visualize the tables in this way.
  • Multiple Layers: I really like the idea of creating a Data Foundation, then one or more Business Layers on top of it. It avoids the old way of recreating the table structure in multiple universes.
  • Business Layer Views: Once you create a Business Layer, you can create views, in which you choose which folders and objects to expose in each view. You can also create security profiles, allowing groups access to limited views.
  • Standalone Lists of Values and Prompts: Prompts and LOVs can be created once, as standalone objects, and then used in multiple objects throughout the universe. These can be created in the Data Foundation or the Business Layer.
  • Table associations: This may seem minor, but I love being able to see what tables are associated with an object without opening a separate window.
  • Queries: Queries can be built in the Business Layer to test the objects and joins in a universe. This eliminates the need to use another tool, such as Web Intelligence, to test as you go.
  • View Data: We’ve always had the ability to view the data in a table or column, but now we can view the data in multiple columns from different tables. This is a big help when troubleshooting the Data Foundation.
  • Calculated columns: These are columns that can be added to tables, using SQL calculations. This allows you to add a calculated column to a table, and then use it in multiple universes.
  • Bug Free: Well, not really. But compared to IDT 4.0, it’s a huge improvement.
  • Multi-Source Universes: We now have the ability to create a universe from multiple sources. We can create a multi-source universe (MSU) and include tables, for example, from Oracle and Teradata, and join them together.

The Bad News

There are still some things that I struggle with. Perhaps it’s because I’ve spent so much time in Designer, or perhaps it’s just poor design. You decide.

  • Contexts: These aren’t as simple as advertised. In Designer, each join has one of two states, relative to a context. It is either in, or out, of the context. In IDT, it has three possible states. It can be included, excluded, or neutral. My experience is that this makes contexts significantly more complex. I worked on one universe that had 65 aggregate tables, plus two transaction fact tables. Setting up the context became so complex and problematic, that we ended up rebuilding the universe in UDT.
  • Screen Layout: I haven’t gotten used to the way the screen is laid out. I got used to Designer, with its individual pop-up windows. IDT doesn’t have these, so you get less screen real estate. And it doesn’t allow anything to be undocked and moved to a second monitor.
  • Custom Properties Tab: You’ll see this on folders, objects, and a few other things. It’s useless. It’s there for whenever we get an IDT SDK. So, in the meantime, there’s nothing you can do with it. So it should be hidden. Of course, that’s just my opinion.
  • Alias Table Names: Again, this may seem minor, but it’s a major inconvenience. When you have an alias of a table, the Alias name, and the original name, are next to each other in the table header, and don’t wrap. So the width of the table in the Data Foundation must be sufficient to fit both names. This can be very annoying, especially with long table names. In UDT, the original name is below the Alias name, so the table doesn’t have to be so wide.
  • Multi-Source Universes: Why is this listed here, as well as under Good News, above? Well, when you create a universe, you must select whether that universe will be a Single-Source, or Multi-Source, universe. Once you make that selection, you can never change it. And, if you choose Multi-Source, all queries will be routed through the Data Federation Engine, even if the universe actually has only one connection. This can slow down performance of that universe.

Should you convert your universes?

That’s a good question. I’m glad you asked. At this point, there is little benefit to the users in converting your old UNV universes to UNX universes. The only reasons you might convert them is for the benefit of universe developers who want to take advantage of some of the features, such as families, or standalone LOVs, or if you need to use these universes with products that don’t accept UNV universes, such as Lumira.

You might wonder if you should convert old universes, so you can turn them into Multi-Source universes. When a UNV universe is converted to a UNX universe, it is converted as a Single-Source universe. And remember, you cannot change a Single-Source universe into a Multi-Source universe. So, don’t convert UNV universes with the hope of making them Multi-Source. It can’t be done.

The last I heard, SAP had promised to support UNV universes until at least 2019. So, there’s little compelling reason to convert them. And if you’re comfortable working in UDT, feel free to continue. However, new universe development should be done in IDT, as much as possible. All BusinessObjects tools can access UNX universes.


Although IDT still has some issues, I like it. IDT 4.1 is a huge improvement over IDT 4.0. Yes, it still has bugs (So does UDT), and it still has room for improvement, but it’s getting better all the time. Let me know your opinion.

Impact of a Database Migration

For the past few years I’ve been involved in quite a few “forklift” projects. What is a forklift project? I’m glad you asked. When a company decides to move their data warehouse from one database platform to another by simply moving, or copying, all the existing tables into the new database, this is called a forklift. The tables are simply lifted from one database, moved to the new database, and set down there. The data model isn’t changed; that’s a whole other project. But oftentimes, some things do change in a forklift project. Data types may differ between the two databases. Or one database may have different reserved words than the other, causing table names, or column names, to be changed. But, for the most part, everything stays the same.

My role in the project is to convert the BusinessObjects universe(s) and reports to the new database platform. I’ve done this enough times, that I’ve developed some standard templates to help me make sure that I don’t miss anything. I don’t want to get to UAT (User Acceptance Testing) and find out something doesn’t work that I forgot to check.

So, what happens in a Forklift project? Well, from a BusinessObjects perspective, we would prefer that the database structure be complete before we start development. This includes the data access layer, if needed. It would be nice to have some data in the tables during development, but we can get started without data.


Documentation

Sometimes we like to treat documentation like it’s a bad word. After all, it’s so much faster to just do the work, and skip the documentation. But trust me on this: in the long run, documentation will be absolutely necessary. You need to know where you were, and what you’ve done, in case you ever need to revert, or fix, anything in a universe or report.

I always start with a workbook, with a worksheet for each part of the universe. I create one workbook for each universe that will be converted. I have a template that I use for each project. The workbook has eight sheets in it, as follows:

  • Parameters: It’s not uncommon to change universe parameters, in order to optimize the universe for the new database. On this tab, we document each parameter that is changed.
  • Tables: Most of the time, the database/owner name will change for each table. And sometimes, table names will change. On this tab, we document the old database/table names, and the new database/table names.
  • Derived Tables: The SQL in each Derived Table will need to be converted from the old database to the new database. On this tab, we document the old SQL and the new SQL. Note that, for very large, or complex, Derived Tables, we may have to use a Text application, such as Microsoft Word, to document the old and new SQL.
  • Objects: On this tab, we document the old and new SQL for each object that needs to have its SQL changed. We don’t document objects that don’t need to be changed.
  • Joins: Any joins that have database functions may need to be modified to work with the new database. On this tab, we document the old and new SQL only for the joins that are changed.
  • Parameters: This tab is only used with UNX universes, to document any parameter changes that need to be made.
  • Lists of Values: If there are Lists of Values (LOVs) with custom SQL in them, they may also need to be converted. On this tab, I only document the LOVs that have been modified.
  • Other: This is just a tab to document any changes that don’t fit into the other tabs. It might include things like connection and/or universe name changes.

I can’t emphasize enough the importance of documentation. When you start testing reports, if the values don’t match from the old universe to the new universe, you may need to re-convert the SQL from old to new. Having this documentation will make it easier to check your work. Also, once you have converted snippets of code, you may be able to re-use them with the next universe, without having to convert them again.

Converting the Universe

OK, you have the documentation ready, so now it’s time to convert the first universe. It would be best if you can freeze development on the universe at this time, so you’re not trying to convert a moving target. A very simple universe can be converted within an hour. A very complex universe may take a week or two to convert. Be prepared, and set expectations accordingly.

I also like to set up a temporary sandbox environment where all universes and reports will be converted. If this isn’t possible, it can also be done in the Development environment. Before you start the steps below, make sure you create a connection to the new database, and make sure it works. Make sure that the account that you are using to connect to the database has read permission for all the database objects that will be needed for the universe.

Following are the steps that I use to convert a universe:

  1. In the Sandbox/Development environment, create a copy of the universe. Give it a new name that reflects the target database.
  2. Open the new universe in Universe Design Tool (UDT) or Information Design Tool (IDT), depending on the type of universe.
  3. Change the connection for this universe to the connection for the new database. Make sure that you test the connection.
  4. Change any parameters that need to be changed, in order to optimize the universe for the new database. Document your changes.
  5. For each table in the universe, change the owner and/or table name, as needed.
  6. Go through each Derived Table, and update the SQL for the new database. If needed, get help from a DBA, so the SQL is written for optimal performance for the database.
  7. Run an integrity check, parsing objects, joins, and conditions. This will tell you what objects may need to be updated with new SQL.
  8. Go through each object, join, and condition, that produced an error in the integrity check. Update the SQL, as needed. Again, make sure that the SQL you write is optimized for the database.
  9. Look for LOVs that might have custom SQL, and modify as needed. If an LOV is marked to “Export with universe”, it may have custom SQL.
  10. In a UNX universe, look for parameters that may need to be modified, and fix, as needed.
  11. Save and export the universe.
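
As an example of step 6 (the same idea applies to the objects, joins, and conditions in step 8), here’s a hedged sketch of converting a Derived Table’s SQL from Oracle to SQL Server. The table and column names are hypothetical:

-- Original Derived Table SQL (Oracle)
Select order_id,
       Nvl(ship_region, 'Unknown') As ship_region,
       Trunc(Sysdate) - Trunc(order_date) As days_open
From sales.open_orders

-- Converted Derived Table SQL (SQL Server)
Select order_id,
       IsNull(ship_region, 'Unknown') As ship_region,
       DateDiff(day, order_date, GetDate()) As days_open
From dbo.open_orders

Every function that doesn’t exist on the target database needs an equivalent, and this is exactly where a DBA can confirm that the converted SQL is also the fastest option.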

Reports that are associated with this universe will need to be converted from the old universe to this new universe. Once you have done that, test the reports to make sure they run successfully. If they throw errors, work to find out why, and fix those issues. It’s a good idea to test the same reports against the old and new universes. The data may not match at this point, depending on what database you are connected to (Development or Production), and the status of loading data into the new tables. I’ve had times when I tested a report against the new universe, only to get errors, then later found out that the same report got the same errors against the old universe. In that case, just leave it alone.

Converting Reports

Do you have reports with custom SQL in the queries? If so, you will need to convert those queries manually to the new database. Just like you did with the Derived Tables in the universe, you will want to get help from a DBA to convert the SQL, making sure that it’s optimized for the database. Large, complex SQL queries may be tricky to convert, so take your time, and make sure that it is converted correctly. If the query is very complex, I like to break it down into small pieces, and convert each piece, one at a time.


User Acceptance Testing

Once you have all the universes and reports converted to the new database, you’re ready to start User Acceptance Testing (UAT). The primary goal of UAT is to make sure that reports return the same results against the old and new databases. If the results are different, don’t assume that the problem is with the universe. Make sure that you check the database, as well, making sure that the data matches between the old and new databases. If it does, then check the universe, and make sure everything used for that report was converted correctly. Make sure that you update your documentation each time you make a correction in the universe.

You’re all done. It’s time to relax in the sun with your favorite drink. That is, until the next project comes along.

Returning After a Long Absence

It’s been a long time since I’ve posted anything on this blog, and for that, I apologize. For the last year, I’ve been involved in several projects that haven’t done much to inspire new posts here. This year, 2015, I promise to post more, as I learn new tips and techniques with BusinessObjects.

I’ve also been bad about approving comments over the past ten months, or so. And for that, I also apologize. I’ve now gotten caught up on approving comments, and have even replied to a few. I promise to be more attentive to comments. I appreciate all of them. Well, at least the ones that aren’t spam.

I’m planning some new posts, and should have some ready soon. But I’d also like to get some ideas from others. If you have any thoughts on topics that you would like to see covered here, feel free to comment to this post. I will give serious consideration to all ideas.

Thank you for sticking with me through this last year of silence. It’s time to break that silence, and start making some noise again. Who’s with me?

Optimizing Universe Lists of Values

Lately, I’ve seen a lot of heartache over universe Lists of Values (LOVs). Lists of Values can be a great benefit to users, and a nightmare for developers and DBAs. So, I thought it might be handy to document some best practices for keeping your LOVs user friendly and performing well. Let’s start by going over the various options for LOVs, just so we have a good understanding of what is available, and what makes sense.

There are two times when a user may see a list of values for an object. When they are building a filter in a query, based on a universe object, they may want to choose values for the filter from a List of Values. Or, when a prompt is applied to an object in a query filter, the user may see a LOV at run time when the prompt window appears. Each object in a SAP BusinessObjects universe has the options below available for LOVs. Make sure you put some thought into these options for every object in a universe.

In Information Design Tool (IDT), LOVs can be created in the Data Foundation or in the Business Layer, as standalone objects. If you create the LOV in the Data Foundation, it can be used with any object, in any Business Layer built on that Data Foundation. You can create a LOV using custom SQL, or by importing or creating a static list.

If you create the LOV in the Business Layer, you can use that LOV with any object in that Business Layer only. However, in addition to creating a LOV with custom SQL, or making a static list, you can also create a LOV using a query panel. We’ll talk about static lists later.

Associate a List of Values

This check box determines if the object will have a List of Values. If this box is not checked, the users will not be able to choose from a list when selecting values in a query filter or query prompt. Many dimension or detail objects will need a LOV. However, there are many objects that shouldn’t have a LOV. For example, measure objects don’t need LOVs, as it doesn’t make sense for them. Date/Time objects should not have LOVs. In fact, Date objects usually won’t need a LOV, as the user can select from the calendar widget, or type in the date they want.

What about objects with very large LOVs? For example, SKU Number could have millions of distinct values. So it may not make sense for that object to have a LOV. However, there may be a way to provide a LOV for SKU Number, while keeping the list manageable. Stay tuned.

Allow users to edit this list of values

This is an amusing one, as it tends to scare many developers. Fear not, this isn’t a bad option. It’s just a somewhat obsolete option. Let me explain. When a user of Desktop Intelligence (Deski) creates a query, the universe and associated LOV files are downloaded and stored locally on the client’s machine. Deski allows users to edit their own copy of the LOV for an object. For example, they can add columns, or filters, to the query panel that generates the LOV, so it only returns the values that interest them. This is a good thing.

Deski is currently the only product that allows users to edit their LOVs. I suspect that eventually Web Intelligence (Webi) will also have this feature. But for now, only Deski. And since Deski doesn’t exist in 4.x, this option is currently obsolete in 4.x. So, there’s no need to un-check this option.

Automatic refresh before use

This is one of those options that can be good, or bad. If you have a small list that is fairly dynamic, such as Employee Name, it might be good to use this option. However, make sure that the list can be generated in two seconds or less. You don’t want users waiting for large lists to generate every time they use them. If the list is very large, this can cause significant pain for the users. Use this feature with caution.

On the other hand, if the list is fairly static, don’t use this option. For example, a LOV for State Name isn’t likely to change any time soon. So there’s no need to refresh it on every use.

Hierarchical Display

This is another feature that was designed for Deski. If you have a LOV that is based on data that is part of a hierarchy, you can add the other objects of the hierarchy to the query for the LOV, and have them displayed in hierarchical folders. For example, a LOV for City can be displayed in a hierarchy that starts with Country, then State or Region, then City. This feature only works well with Deski. So, if you’re not using Deski, don’t use this.

Export with universe

The LOV is not stored in the universe. It is stored in a separate file with a .lov extension. If you make any edits to the LOV, those edits are also stored in the LOV file. For example, if you have modified the query that generates the LOV, that query is stored in the LOV file. Or, if you have created a static LOV, that data is stored in the LOV file.

So, in order for the users to benefit from the changes you have made, you will need to export that LOV file along with the universe, into the repository. Also, if you have applied Index Awareness to an object (that’s a topic for another post), the indexes are stored in the LOV file. You will need to export that LOV with the universe.

Delegate search/Allows user to search values in the database

This is an option that can be beneficial for large lists. Looking at the earlier example of SKU Number, which could have millions of values, you can use this option to provide the users with a LOV, but without having to generate a huge list.

Here’s how it works. When you turn on this option, and a user attempts to access the list of values for an object, they will see a blank list, and a set of instructions telling them to enter a search string, including wild card characters. For example, a user could enter “XYZ%” and they will get a list of all values that start with XYZ.

This is called Delegate Search because you’re allowing the users to search for what they need, and you are delegating that search to the database, rather than searching the LOV file for specific values.
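
Under the hood, the search string is simply pushed into the generated query’s Where clause, something along these lines (the table and column names here are hypothetical):

-- Roughly what a delegated search generates when the user enters XYZ%
Select Distinct sku_number
From dbo.product_fact
Where sku_number Like 'XYZ%'

The database does the filtering, so the client never has to build, or store, the full multi-million-row list.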

Note that, in IDT, there is a related option called Force users to filter values before use. This option allows users to enter search values to filter the existing LOV.

List Name

This is the name of the .lov file for this list of values. In earlier versions of BusinessObjects, we were limited to eight characters, and no spaces. But times have changed. You can now create a file name that has more than eight characters, and includes spaces. I like to give it a name that relates to the object, or the data in the LOV. That way, it’s easier to find the file if I ever have to do troubleshooting.

In the new universe tool, Information Design Tool (IDT), LOVs can be standalone objects that can be reused for multiple objects. But did you know that you can use one LOV for multiple objects in Designer as well? For example, let’s say you have an object that will return only the values of Yes, No, or Maybe. So, you generate a static LOV file, and name it YesNoMaybe. Make sure you check the “Export with universe” box. Now, if you have other objects that will return the same three values, enter the same name for those objects’ LOVs, but don’t check the “Export with universe” box for those. You only need to check it for one object. In this way, all objects with the same name for their LOV will share the same LOV file.

You will notice that the default file name is pretty cryptic, especially if the object was created by using the New Object button. If you ever want to return to the default name (I have no idea why you would want to), click the Restore Default button.


Edit

If you want to customize the list for an object, click the Edit button. This will open the Query Panel for this universe. You can use the Query Panel to customize the query, or you can click the SQL button on the Query Panel tool bar to write custom SQL for this LOV.

If you want to create a hierarchical list (discussed earlier), make sure that you leave the current object, as it is, in the Query Panel, and add the rest of the hierarchical objects after that, in hierarchical order. For example, if you want to create a hierarchical list for City, then City must be the first object in the Query Panel. Then, after that, add the rest of the hierarchy from top to bottom. So, the objects in the Query Panel might be as follows, in this order:

City – Country – State – County

I know, that’s counter intuitive, but trust me. It works. Also, make sure that you apply sorts to each of these objects in the query, in the order of the hierarchy.

You can also use custom SQL to create a list with values that don’t appear in the database. For example, maybe you want your City LOV to include an “All” value, in addition to the list of cities. So you might write custom SQL as follows:

Select Distinct City
From dbo.location
Union
Select ' All'
From Dual

Obviously, this is Oracle SQL, but it will work with any database with minor modifications. Notice that I put a space in front of the word “All”. This makes sure that All sorts to the top of the list, and sets up an old trick that allows users to select All from the list to get all cities in their results.

When you edit the query for a LOV, the Export with universe button is automatically checked. Don’t un-check it. 😉


Display

This button is more important than it may seem. When you click this button, the list is generated, and displayed. In other words, the query for this LOV is run, then the results are displayed. Of course, the query is only run if you haven’t run the query before, or if you have the “Automatic refresh before use” option turned on. Otherwise, this button will simply display the existing list.

So why is this button so important? Well, if you have the “Export with universe” option checked, then you need to generate the LOV file in order to have it exported with the universe. If you don’t generate the file, then there is nothing to export.

Query Limits

In IDT, you have the option of setting limits on the query that generates the LOV. You can set a time limit, as well as a row limit. By default, neither of these is turned on. I would not suggest setting a row limit. There’s nothing more frustrating to a user than a partial LOV.

Static Lists of Values

A static LOV is a list that never changes, unless you take manual action to change it. The most common method of creating a static LOV is to import a file, such as an Excel spreadsheet, or a text file. Depending on which tool you’re using, the method, as well as the options, will vary.

So let’s start with Designer (a/k/a Universe Design Tool). If you want to create a static LOV, you start, of course, by creating a text file, or spreadsheet. Simply create a file with the needed values in a column. Make sure that the first row is the name of the object, or list.
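
For example, a text file feeding a hypothetical Order Status LOV might look like this, with the list name in the first row:

Order Status
Open
Shipped
Cancelled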

Next, go to Tools – Lists of Values – Edit a list of values. Find the object for which you want to create the static list, and click on it. Then, at the bottom of the window, click the Personal Data radio button.  This will open the Access Personal Data window. Browse for the file. Make sure you select the “First row contains column names” option. Then click Run. This will generate the LOV file from the text file or spreadsheet.

In IDT, you have the additional option of entering the list manually, simply by adding rows and typing in the values. In the Data Foundation or Business Layer, go to the Parameters and Lists of Values section. Next, click the New button at the top of the Lists of Values window, and select Static list of values. If you want to create the list here, click the Add Column button on top to add additional columns, or click the + button on the right to add additional rows. You can change the name of the column by double-clicking on the column header.

If you want to import an existing file, click the Import button, and browse for the file. Complete all the relevant options, including “First row contains column names”. When you click OK, the list is imported.

Applying a LOV to an Object

In IDT, once you create a LOV, you must then apply it to one or more objects. To do this, go to the Business Layer, and select the object. Next, click the Advanced tab, and you will see the LOV options on the right. Make sure that Associate a List of Values is selected. Then, click the ellipsis button to the right of the LOV field. Select the appropriate LOV and click OK. Note that if you click the X button to the right of the LOV name, it will remove that LOV, and the object will simply use the default “Select Distinct” LOV.

Sourcing a List of Values

Where do you get the data for a LOV? Well, ideally, from a small dimension table. But what happens when you have a dimension object built against a large fact table, with millions, or billions of rows? Well, by default, the LOV will be generated by running a “Select Distinct” against the fact table. This can take a significant amount of time to populate.

So the first thing you want to do is to see if the same values are available in a dimension table. If so, re-point your LOV to the dimension table. Or, if possible, re-point the object to the dimension table. But, at the very least, get the data for the LOV from the dimension table. However, if the data is not available in a dimension table, talk to your DBA, and see if they can create a reference table with the distinct values you need for the LOV. If the list is fairly dynamic, the reference table can be updated with the nightly ETL process.
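
For example (the table names here are hypothetical), the DBA might build that reference table with something like this, refreshed by the nightly ETL:

-- Build a small reference table for the SKU Number LOV (Oracle syntax)
Create Table ref_sku_number As
Select Distinct sku_number
From dw.order_fact;

Pointing the LOV at ref_sku_number means the expensive Select Distinct against the fact table runs once a night, instead of every time a user opens the list.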

The bottom line is that you want to avoid getting LOVs from fact tables. Users will get very frustrated waiting minutes for the LOV to populate.


As you can see, there’s a lot more to a List of Values than meets the eye. There are many options, and getting them right can make a significant difference for the users. Take the time to evaluate each object, and set the options that make sense for that object. Let me know if you have additional tips for Lists of Values.