Sum Prior Months into Current Month (Webi)

Yes, this was an actual request, and when it came in, I honestly wasn’t sure if it was possible. Open orders are allocated to the month in which they are scheduled to ship. But some orders, scheduled to ship in past months, for whatever reason, didn’t ship when scheduled. They are still open. So the client wanted the open order amounts for prior months to be summed into the current month, while the amounts for future months would stay as scheduled. Well, I hate to say no to any request, so I went to work on it, and, with the use of a flag variable and calculation contexts, I was able to make it work.

Let’s imagine that the raw results from our query look like the following, and the current month is January, 2020.

As you can see, there is quite a bit of data for the prior months. But I need all open_amounts from previous months to be brought forward into the current month. To begin with, I needed a flag variable that would flag the months as needed. Flag variables are very handy for many purposes, and most of the time, will return one of two values. However, in this case, I need a three-way flag variable:

  • If the month is past, flag it with a 1.
  • If the month is the current month, flag it with a 0.
  • If the month is a future month, no flag is needed.

So, to start with, I created a dimension variable called Completed Months Flag. Here is the formula:

=If([invoice_month] < MonthNumberOfYear(CurrentDate()) And [invoice_year] = Year(CurrentDate());1;

If([invoice_year] < Year(CurrentDate());1;

If([invoice_month] = MonthNumberOfYear(CurrentDate()) And [invoice_year] = Year(CurrentDate());0)))

Let’s break this down. The first line will return a 1 for all previous months in the current year. So, this being January, this part will not return a result for any row.

The second line will return a 1 for all previous months in prior years. In this case, it will return a 1 for the months of July – December, 2019.

The third line will return a 0 only for the current month, January, 2020.
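The same three-way test can be sketched outside of Webi. Here is a minimal Python sketch of the flag logic (the function name and sample dates are mine, not part of the report):

```python
from datetime import date

def completed_months_flag(invoice_year, invoice_month, today=None):
    """Mimics the Completed Months Flag: 1 for past months,
    0 for the current month, None (no flag) for future months."""
    today = today or date.today()
    if invoice_year < today.year:
        return 1          # any month in a prior year is past
    if invoice_year == today.year and invoice_month < today.month:
        return 1          # an earlier month in the current year is past
    if invoice_year == today.year and invoice_month == today.month:
        return 0          # the current month
    return None           # future months get no flag

# With a "current date" of January 2020, as in the example:
jan_2020 = date(2020, 1, 15)
print(completed_months_flag(2019, 7, jan_2020))   # 1
print(completed_months_flag(2020, 1, jan_2020))   # 0
print(completed_months_flag(2020, 3, jan_2020))   # None
```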

When I add the flag variable to the table, I see the following results.

So now I know which months need to be summed up (those marked with a 1), and where they need to be summed up (the month marked with a 0). Next, I need to take advantage of calculation contexts, and create a measure variable, called Open Forward, that will sum the prior months into the current month. Here is the formula.

=Sum(If([Completed Months Flag]<>1;Sum([open_amount] In Block Where([Completed Months Flag]=1)))) Where([Completed Months Flag] = 0) + Sum([open_amount]) Where([Completed Months Flag] <> 1)

If you think this looks complicated, you’re right. So let’s walk slowly through this formula, part by part.

We start with an If function. If the flag is not equal to 1 (that is, the current and future months), then sum the open_amount for the entire block, but only where the flag does equal 1 (that is, all previous months). But only do this where the flag is equal to 0 (that is, the current month). Let me restate that: wherever the flag variable is 1, sum up all of those values in the block, and place that sum on the row where the flag is equal to 0. So all previous months’ values are summed into the current month.

If we stopped there, we would get the prior months summed into the current month, but all future months would be left blank, and the open_amount for the current month would also be blank. That leads us to the final part, which simply adds the sum of the open_amount where the flag does not equal 1. So each month, from the current month going forward, simply displays the open_amount for that month. In the case of the current month, that amount is added to the sum of the previous months. With a little formatting, we end up with a table that looks like this:

Of course, you can remove the flag variable, and the open amount columns from the table, and you have the final results. Needless to say, I was pretty excited to actually make this work, and the client was quite pleasantly surprised. I also made a version of this in Tableau, and will have a post on that soon.
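For readers who want to sanity-check the roll-forward logic outside of Webi, here is a minimal Python sketch with invented open-order data (current month January 2020, as in the example):

```python
# Hypothetical open-order rows: (year, month, open_amount)
rows = [
    (2019, 7, 100.0), (2019, 8, 250.0), (2019, 12, 150.0),  # past months
    (2020, 1, 300.0),                                        # current month
    (2020, 2, 400.0), (2020, 3, 500.0),                      # future months
]
current = (2020, 1)

# Total of all past months (the rows the flag variable would mark with a 1)
past_total = sum(amt for (y, m, amt) in rows if (y, m) < current)

open_forward = {}
for y, m, amt in rows:
    if (y, m) < current:
        continue                      # past months are rolled forward, shown blank
    # The current month (flag 0) gets its own amount plus the past total;
    # future months simply show their own amount.
    open_forward[(y, m)] = amt + (past_total if (y, m) == current else 0)

print(open_forward)
# {(2020, 1): 800.0, (2020, 2): 400.0, (2020, 3): 500.0}
```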

Impact of a Database Migration

For the past few years I’ve been involved in quite a few “forklift” projects. What is a forklift project? I’m glad you asked. When a company decides to move their data warehouse from one database platform to another by simply moving, or copying, all the existing tables into the new database, this is called a Forklift. The tables are simply lifted from one database, moved to the new database, and set down there. The data model isn’t changed. That’s a whole other project. But oftentimes, some things change in a forklift project. Data types may differ between databases. Or one database may have different reserved words than another, causing table names, or column names, to be changed. But, for the most part, everything stays the same.

My role in the project is to convert the BusinessObjects universe(s) and reports to the new database platform. I’ve done this enough times, that I’ve developed some standard templates to help me make sure that I don’t miss anything. I don’t want to get to UAT (User Acceptance Testing) and find out something doesn’t work that I forgot to check.

So, what happens in a Forklift project? Well, from a BusinessObjects perspective, we would prefer that the database structure be complete before we start development. This includes the data access layer, if needed. It would be nice to have some data in the tables during development, but we can get started without data.


Sometimes we like to treat documentation like it’s a bad word. After all, it’s so much faster to just do the work and skip the documentation. But trust me on this: in the long run, documentation will be absolutely necessary. You need to know where you were, and what you’ve done, in case you ever need to revert, or fix, anything in a universe or report.

I always start with a workbook, with a spreadsheet for each part of the universe. I create one workbook for each universe that will be converted. I have a template that I use for each project. The workbook has eight sheets in it, as follows:

  • Parameters: It’s not uncommon to change universe parameters, in order to optimize the universe for the new database. On this tab, we document each parameter that is changed.
  • Tables: Most of the time, the database/owner name will change for each table. And sometimes, table names will change. On this tab, we document the old database/table names, and the new database/table names.
  • Derived Tables: The SQL in each Derived Table will need to be converted from the old database to the new database. On this tab, we document the old SQL and the new SQL. Note that, for very large, or complex, Derived Tables, we may have to use a Text application, such as Microsoft Word, to document the old and new SQL.
  • Objects: On this tab, we document the old and new SQL for each object that needs to have the SQL changed. we don’t document objects that don’t need to be changed.
  • Joins: Any joins that have database functions may need to be modified to work with the new database. On this tab, we document the old and new SQL only for the joins that are changed.
  • Parameters (UNX): This tab is used only with UNX universes, to document any changes that need to be made to parameters, if any.
  • Lists of Values: If there are Lists of Values (LOVs) with custom SQL in them, they may also need to be converted. On this tab, I only document the LOVs that have been modified.
  • Other: This is just a tab to document any changes that don’t fit into the other tabs. It might include things like connection and/or universe name changes.

I can’t emphasize enough the importance of documentation. When you start testing reports, if the values don’t match from the old universe to the new universe, you may need to re-convert the SQL from old to new. Having this documentation will make it easier to check your work. Also, once you have converted snippets of code, you may be able to reuse them with the next universe, without having to convert them again.

Converting the Universe

OK, so you have the documentation ready, so now it’s time to convert the first universe. It would be best if you can freeze development on the universe at this time, so you’re not trying to convert a moving target. A very simple universe can be converted within an hour. A very complex universe may take a week or two to convert. Be prepared, and set expectations accordingly.

I also like to set up a temporary sandbox environment where all universes and reports will be converted. If this isn’t possible, it can also be done in the Development environment. Before you start the steps below, make sure you create a connection to the new database, and make sure it works. Also make sure that the account you are using to connect to the database has read permission for all the database objects that will be needed for the universe.

Following are the steps that I use to convert a universe:

  1. In the Sandbox/Development environment, create a copy of the universe. Give it a new name that reflects the target database.
  2. Open the new universe in Universe Design Tool (UDT) or Information Design Tool (IDT), depending on the type of universe.
  3. Change the connection for this universe to the connection for the new database. Make sure that you test the connection.
  4. Change any parameters that need to be changed, in order to optimize the universe for the new database. Document your changes.
  5. For each table in the universe, change the owner and/or table name, as needed.
  6. Go through each Derived Table, and update the SQL for the new database. If needed, get help from a DBA, so the SQL is written for optimal performance for the database.
  7. Run an integrity check, parsing objects, joins, and conditions. This will tell you what objects may need to be updated with new SQL.
  8. Go through each object, join, and condition, that produced an error in the integrity check. Update the SQL, as needed. Again, make sure that the SQL you write is optimized for the database.
  9. Look for LOVs that might have custom SQL, and modify as needed. If an LOV is marked to “Export with universe”, it may have custom SQL.
  10. In a UNX universe, look for parameters that may need to be modified, and fix, as needed.
  11. Save and export the universe.
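Steps 6 and 8 involve rewriting SQL for the new dialect. As an illustration only, a first find-and-replace pass over common function renames can be scripted; the mappings below are hypothetical examples (Oracle-style to SQL Server-style), and any real conversion still needs a DBA’s review:

```python
import re

# Hypothetical dialect mappings; extend per project. This is only a rough
# first pass over the documented SQL snippets, not a full SQL translator.
FUNCTION_MAP = {
    r"\bNVL\s*\(": "COALESCE(",
    r"\bSYSDATE\b": "GETDATE()",
    r"\bSUBSTR\s*\(": "SUBSTRING(",
}

def convert_sql(sql: str) -> str:
    """Apply each rename pattern, case-insensitively."""
    for pattern, replacement in FUNCTION_MAP.items():
        sql = re.sub(pattern, replacement, sql, flags=re.IGNORECASE)
    return sql

old_sql = "SELECT NVL(ship_date, SYSDATE) FROM orders"
print(convert_sql(old_sql))
# SELECT COALESCE(ship_date, GETDATE()) FROM orders
```

The converted snippets still belong in the documentation workbook, so they can be reused for the next universe.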

Reports that are associated with this universe will need to be converted from the old universe to this new universe. Once you have done that, test the reports to make sure they run successfully. If they throw errors, work to find out why, and fix those issues. It’s a good idea to test the same reports against the old and new universes. The data may not match at this point, depending on what database you are connected to (Development or Production), and the status of loading data into the new tables. I’ve had times when I tested a report against the new universe, only to get errors, then later found out that the same report got the same errors against the old universe. In that case, just leave it alone.

Converting Reports

Do you have reports with custom SQL in the queries? If so, you will need to convert those queries manually to the new database. Just like you did with the Derived Tables in the universe, you will want to get help from a DBA to convert the SQL, making sure that it’s optimized for the database. Large, complex SQL queries may be tricky to convert, so take your time, and make sure that it is converted correctly. If the query is very complex, I like to break it down into small pieces, and convert each piece, one at a time.


Once you have all the universes and reports converted to the new database, you’re ready to start User Acceptance Testing (UAT). The primary goal of UAT is to make sure that reports return the same results against the old and new databases. If the results are different, don’t assume that the problem is with the universe. Check the database as well, and confirm that the data matches between the old and new databases. If it does, then check the universe, and make sure everything used for that report was converted correctly. Also update your documentation each time you make a correction in the universe.

You’re all done. It’s time to relax in the sun with your favorite drink. That is, until the next project comes along.

Differences Between Desktop Intelligence and Web Intelligence

What are all the differences between Desktop Intelligence (Deski) and Web Intelligence (Webi)? The question has been asked so many times, including from clients, that I decided to attempt to put together a comprehensive list of the differences. Why is such a list important? Well, with the release of SAP BusinessObjects 4.0 in September 2011, Desktop Intelligence is no longer available, and is now completely replaced by Web Intelligence. So, for organizations that use Desktop Intelligence, it’s important to know what those differences are, as they transition from Deski to Webi.

The differences can be divided into two categories:

  • Deski features that are missing from Webi
  • Webi features that never existed in Deski

Fortunately, that first category is getting shorter with each new release of Webi. However, since that category is most important to those migrating from Deski to Webi, we’ll cover that category first.

Deski Features that are Missing from Webi

  • Display a table in multiple columns: In Deski, you can set a table so that, when it reaches the bottom of a page, it “wraps” to the next column on the same page, similar to how a newspaper column continues on the same page.
  • Grouping button: Deski has the ability to create grouped variables with the click of a button. In Webi, we have to write the If – Then – Else logic manually to create such functionality. This can be quite tedious. I have submitted an idea to SAP to have this functionality added to Webi. You can vote for this idea here.
  • Freehand SQL queries: I’ve never been a big fan of this, as it defeats the whole purpose of having universes, and also creates maintenance nightmares. But there is some value for it in a test environment. This idea has been submitted to SAP here, and it is currently under review.
  • Save as RTF, TXT, HTML: Currently, Webi can save as Excel, PDF, or Webi only.
  • Edit List of values: In Deski, users could edit a List of Values for a universe object, so that it would only display the values that are valuable to them.
  • List of values in formula editor: In Deski, when creating a formula, you can select values for a particular object, from a list, provided in the formula editor. This idea has been accepted by SAP, and should appear in a future version of Webi.
  • Center on page: Deski has a Page Layout option to center a block, or cell, on the page, between the margins.
  • VBA Macros: Deski supports Visual Basic macros. I don’t know if Webi ever will support this, but I suspect it will support some kind of programming language in the future.
  • Delete multiple tabs simultaneously: This seems like such a simple thing, but Webi still doesn’t have this capability.
  • Export multiple documents simultaneously: That’s right. In Webi, we can only export documents to the repository one at a time.
  • Create a variable in its current context: In Deski, we could create a variable from a formula, and lock in the current context of the formula, even if it was not explicitly included in the formula.
  • Support for all installed fonts: Webi uses its own font library, not the Windows fonts. So, unlike Deski, we can’t add additional fonts to Webi. An idea has been submitted for this here.
  • Schedule to a printer: I realize that printing is becoming more and more obsolete, but it isn’t there yet. Deski reports could be scheduled to print automatically. Webi reports cannot.
  • Automatic custom sort for month names and weekday names: Deski has built in custom sorts for Months and Weekdays.

So there you have it. As far as I know, that’s a complete list of features that Webi is still missing from Deski. If you know of more, please mention them in the comments below.

Webi Features that never existed in Deski

  • Track Data Changes: Allows you to see the difference between two different refreshes of a document, and compare the results.
  • Hyperlinking Wizard: You can create an OpenDocument link in about 5 minutes, as opposed to the days or weeks spent doing this in Deski.
  • More built-in formulas: Webi has more built-in formulas than Deski. These formulas come in the way of standalone cells that you can simply drag onto your report.
  • Save as Excel with 65K+ rows: You can save a Webi document to the Excel 2007 format, which allows for over 1 million rows in a single worksheet.
  • Add temporary values to custom sort: When creating a custom sort, you can add values that are missing, to the sort, so that, when they eventually show up on the report, they will sort in the correct order.
  • Box chart: A new chart in Webi that allows you to see a measure at five different data points across a dimension.
  • Tag cloud chart: A new chart in Webi that allows you to easily compare two measures against a single dimension.
  • Heat map chart: A new chart in Webi. It’s the equivalent of a crosstab table, but in chart format.
  • Bubble chart: A new chart in Webi that compares two measures against a single dimension, in an easy to read format.
  • Optional Prompts: Prompts created in the query can be marked as optional, allowing users to ignore the prompts, if they wish.
  • Display query summary on report: Webi has a QuerySummary function that allows you to easily display information about the query on the report.
  • Inner join between queries: Webi’s Merge Dimension feature allows you to choose how the data is displayed on the report. It doesn’t have to be a full outer join, like in Deski.
  • Query Drill: Allows you to keep your data set small, while drilling through large amounts of data.
  • Drill in InfoView: Webi documents, of course, can enter drill mode in InfoView (BI Launchpad). Deski documents cannot.
  • Edit in InfoView: Webi documents can be easily edited in InfoView (BI Launchpad), by anyone who has permission.
  • Alternate row shading: We used to have to do some sleight of hand with variables to get this to work in Deski. But Webi has it built in.
  • Input Controls: This allows for user interaction in the reports, making What-if analysis quite simple. It also makes it easy to build powerful dashboards in Webi.
  • Advanced formatting rules: That’s the new name for Alerters. Webi’s Formatting Rules editor allows for more powerful options, including using formulas in the rule.
  • Publish as a BI Service: Individual blocks in Webi can be published as a Web Service that can be consumed by other BI applications.
  • Compatible with Live Office: Report parts from Webi documents can be consumed by Live Office, allowing integration with MS Office.
  • Compatible with BI Widgets: Webi report parts can be consumed by BI Widgets, allowing them to be dropped onto the desktop as standalone objects.
  • BEx queries: Webi can run queries directly against BEx queries.
  • Query stripping: Allows for the automatic removal of objects from a query against an OLAP cube that do not directly contribute to the report(s).
  • Synchronize drill on report blocks: When drilling on one report block, other blocks on the same report can drill at the same time, staying in sync.

As far as I know, that’s everything that Webi offers over and above what Deski had. If you know of any others, feel free to mention them in the comments, below.


So there you have it. By my count, Webi wins, 24 to 14. While there are still some features missing from Webi, there aren’t that many, and hopefully, none of them will be show stoppers for you.

Impressions of Web Intelligence 4.0

SAP BusinessObjects Web Intelligence 4.0 has been Generally Available since September, 2011, and has generated a significant amount of buzz in the BusinessObjects community. I have no plans to create more buzz. My only desire here is to give an honest assessment of the new version. And by honest, I mean I will share what I have seen, both good, and bad.

Ribbons are for Packages

With this version, SAP has gone the way of Microsoft, replacing toolbars with ribbons. I’ve been using the ribbons in MS Office for several years now, and I still prefer the old menus and toolbars. The same is true for Webi. I don’t like the ribbons, for several reasons:

  • I have to remember which ribbon to select, for the feature that I need. If I don’t remember, then I end up hunting through ribbons. This wastes time. I don’t like to waste time. I can easily remember which ribbon to select for the commonly used features. But for the less used ones, I rarely remember which ribbon to choose.
  • Operations that used to take one click (simply click the button) now take two clicks. Click the appropriate ribbon, then click the appropriate button.

I’m sorry, but ribbons don’t make me more productive. They only slow me down.

Charts and More Charts

Web Intelligence now shares a common chart library with other tools. That was a smart decision. It creates a common look and feel across the various tools. But it also includes a host of new charts that we’ve never had before. Here are a few examples.

Box Chart:

The box chart shows the following five different values for a pair of dimensions, with a measure:

  • Minimum
  • Maximum
  • First quartile
  • Third quartile
  • Median

Users can mouse over the chart to see the values.

Column chart:

Yes, we’ve always had column charts, but now we can make the columns round, and we have many more powerful options for these charts.

Tag Cloud Chart:

The Tag Cloud chart allows the comparison of two different measures against a dimension.

Heat Map Chart:

The Heat Map chart is the charting equivalent to a crosstab table. It displays the intersection of two different dimensions with a measure.

Bubble Chart:

The Bubble Chart allows for the comparison of two measures against a common dimension.

Some of the new charts look quite useful. But, in addition to the new charts, we have a host of new features for the charts. There are far too many to mention. But, to name a few, you can base the chart title on a formula. You can also use gradient colors on charts.

Unfortunately, we’ve lost the ability to create custom color palettes. I’m disappointed with that.

Improved Compatibility with MS Excel

We now have the ability to save a Webi document to the new .xlsx format. When we do, we can save over a million rows of data to a single worksheet. Of course, we still have the ability to save in the old format (.xls) as well. In that case, if you have more than 65,536 rows, the data gets split into multiple worksheets. So now you have a choice as to which format to use.
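The arithmetic behind the worksheet split is straightforward. A quick sketch, using the documented row limits of the two formats (the actual split may also reserve a row for headers):

```python
import math

XLS_ROW_LIMIT = 65_536        # legacy .xls worksheet row limit
XLSX_ROW_LIMIT = 1_048_576    # .xlsx worksheet row limit

row_count = 200_000           # example document size

sheets_xls = math.ceil(row_count / XLS_ROW_LIMIT)    # split across .xls sheets
sheets_xlsx = math.ceil(row_count / XLSX_ROW_LIMIT)  # fits in one .xlsx sheet
print(sheets_xls, sheets_xlsx)  # 4 1
```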

What Happened to Tracking?

I don’t know how this was missed in UAT, but the Data Tracking feature, which was introduced in XI 3.x, is broken in 4.0. Here’s the issue: if you use Data Tracking, the only rows that will be formatted by the Data Tracking feature are the rows that are also formatted by the Alternate Row Shading feature. Seriously.

So, if you use Data Tracking, here’s the trick: set your Alternate Row Shading frequency to 1, and Data Tracking will work as expected. Yes, SAP is aware of this bug, and plans to issue a fix in SP04, which is currently scheduled for July 2012. It is being tracked as ADAPT01618287.

What’s in a Name?

Some things have been renamed in Webi 4.0. Some of the new names make sense, others do not. The problem, of course, is that, with the new Ribbons, buttons have been moved around, and features have been renamed. This makes it that much more difficult to find what you need.

Alerters have been renamed to Formatting Rules. That makes sense.

The three axes of a 3D chart have also been renamed:

  • X Axis is now called Category Axis (This makes sense)
  • Y Axis is now called Value Axis (This one makes sense, too)
  • Z Axis is now called Region Color (Huh? What were they thinking?)

Since the rest of the world still refers to these as the X, Y, and Z axes, I have no idea why SAP decided to go against the grain.

The Data Manager is a Welcome Addition

One of the complaints from people moving from Desktop Intelligence to Web Intelligence has been the lack of a Data Manager. In Deski, we could always see the raw data, and information about the data, in the Data Manager. Now Web Intelligence has the same feature. It’s a big improvement. The only thing still missing, which Deski had, is the ability to see metadata for multiple past refreshes.

Hide and Go Seek?

For a long time now, I’ve missed some of the hide capabilities in Deski. They’ve now been added to Webi. 🙂

We can now hide a block, based on a formula. If the formula evaluates to true, the block is hidden. Otherwise, the block is shown.

We also now have the ability to hide columns in a block. Simply right-click in a column, and hide it. So simple. In fact, it’s easier than Deski. 🙂

Don’t Have a Fit

Fit to Page has finally been added. That’s another feature that former Deski users have been begging for, and now we have it. Thank you, SAP!


As you can see, not all news is good news with Webi 4.0. But SAP has made significant strides in adding in the missing functionality from Deski. There is still room for improvement, but we’re getting closer. Look for another post detailing the differences between the two products.

If you’ve been using Webi 4.0, what is your opinion?

Migration from Deski to Web Intelligence

As you probably know by now, with the release of SAP BusinessObjects Business Intelligence 4.0 (BI4), Desktop Intelligence (Deski) is officially retired. Of course, we’ve known that this day would come for nearly ten years, and some people started early in migrating their Deski documents to Web Intelligence (Webi). But many also delayed until the last minute, waiting for Webi to mature a little more. But the time for waiting is over, and it’s now time to put Deski behind us. So, to that end, I’d like to share some experience in converting those Deski documents to Webi.

The Right Tool for the Right Job

Most of your Deski documents can be converted to Webi with the help of the Report Conversion Tool. If you haven’t used this tool yet, try it out with one of your Deski Documents, and see how it does. I’ve used it many times, with varying degrees of success. Sometimes it converts complex documents, with no issues, much to my surprise. Sometimes it fails to convert very simple documents, much to my dismay. 😦

So, my first piece of advice is, if it fails to convert a simple document, don’t waste time trying to figure out why. Simply rebuild the document, from scratch, in Webi. I hate to admit it, but I learned this lesson the hard way.

I hate failure, and am usually not willing to give up easily. One time, I ran a Deski document through the Report Conversion Tool (RCT), and it failed. I couldn’t figure out why. The document had one query, one table, and one report, with only one simple variable. I spent two hours trying to figure out why it wouldn’t convert successfully. Then it dawned on me. I could rebuild it in Webi in five minutes. It wasn’t worth the time.

So, for simple reports, try it once with the RCT. If it fails, simply rebuild the document.

Did it Work?

The RCT has three potential outcomes for each document:

  • Converted Successfully
  • Partially Converted
  • Not Converted

If the conversion is successful, compare the Deski version with the Webi version. Make sure the formatting is correct. Many times, it will need some touching up. And make sure the data is the same. Not all functions work exactly the same, so you may need to modify the formulas in some variables to get the correct results.

If the document is partially converted, compare the two, and see what didn’t convert. For example, if your Deski document has any macros in it, they won’t be converted, as Webi doesn’t support that feature. You may need to get creative to find another way to get the same results.

If the document is not converted at all, the RCT will tell you why. Then you get to decide if it’s easier to try to solve the problem, or simply rebuild the document.

Freehand SQL

I’ve struggled a long time with this. Best practices tell us that we should never be using Freehand SQL to build reports. It defeats the purpose of using BusinessObjects. It is very difficult to maintain. And it often violates corporate security policy. But that hasn’t stopped people from building these reports. Sometimes it’s just easier to write the SQL by hand, rather than modify the universe to accommodate the report’s requirements. I get that.

So what do we do with these documents? Well, the good news is that you can use the RCT to convert them to Webi documents. However, since Webi doesn’t support Freehand SQL, how does it work? The RCT will convert the document by generating a universe with a derived table in it. The Freehand SQL from the Deski document will become the derived table in the universe. If the document has multiple Freehand SQL data providers, each will become an individual derived table in the same universe.

In fact, one such universe will be generated for all Deski documents that use the same database connection, and have Freehand SQL data providers.

But don’t leave it there. Once converted, start migrating these new Webi documents to real production universes. Simply change the universe for each query in the Webi document from the auto-generated universe, to your production universes. Finally, after you have all the new Webi documents attached to production universes, you can delete the auto-generated universe.

By the way, this also applies to Webi queries that were built with universes, but then the SQL was edited.

Marketing and Political Unrest

One of the most challenging tasks in converting from Deski to Webi is the fear of change coming from the user community. Let’s face it, once you get good at a tool, you don’t want to be forced to change to another tool. So, we need to make this as easy as possible for the users. This starts with two very important jobs:

  • Management Support: You have to have management behind this decision. If they aren’t, you can find yourself fighting political battles, and not always winning.
  • Marketing: You need a campaign to “sell” Webi to the user community. You want them on your side.

Both of these are interconnected. If the user community screams loud enough, management will not support the project. So, get a few key, influential users, who are respected throughout the organization, involved in the project. They will be your marketing evangelists among the rest of the community. They can help get the rest of users excited about this change.

Management will support the project based on the cost savings realized from supporting one tool, rather than two tools (One of which is reaching “End of Life”).

Are You On Board?

I’ve seen many posts on BOB from IT folks complaining of this change. Some have threatened to leave BusinessObjects and move to another tool. And if that decision makes good business sense, then do it. But don’t change tool sets simply because BusinessObjects, and SAP, have decided to drop Deski.

I grew up on Deski. I have spent hundreds, if not thousands, of hours developing reports in Deski. I became extremely competent with the tool, and even taught the classes for many years. But over the last five years, I’ve used it very little. I’ve focused my attention on Webi, and I’m glad I did. I like Webi very much. And I don’t miss all the quirks and bugs (Remember the Unhandled Exception Error?) in Deski. Yes, I know, Webi isn’t perfect. But it’s a great tool. So, make sure that you’re learning it, and excited about it, before you begin your conversion project.


The bottom line is that Deski is now retired. Honestly, I’m happy about that. As much as I loved that tool, it’s much easier to focus on one less tool in the tool belt. So, I hope this article has given you some useful tips for converting those Desktop Intelligence documents to Web Intelligence.

What is your experience? Do you have any additional tips you could share?

Web Intelligence Performance Tips

The recommendations in this document are intended to help improve the general performance of BusinessObjects Web Intelligence documents. To that end, we’ll begin with an overview of Web Intelligence work flows that affect document performance.

Refreshing a Document

When a Web Intelligence document is refreshed, either manually or by the Job Server, the queries in that document are run sequentially. The first query to run is the query on the first tab in the Query Panel. Then the second query runs, and so on. The tabs can be reordered, if needed, to reorder the run sequence.

Once the query results are retrieved and stored in the document, Web Intelligence begins the process of combining results, where necessary, and calculating any variables or formulas in the document. Finally, the reports are rendered with the results.

Documents with many variables and/or many rows of data can take a substantial amount of time to perform the needed calculations. Therefore, it is recommended to push as many calculations as possible to the universe. This allows the data to be calculated by the database, rather than the report server.

Queries that span multiple contexts will generate multiple SQL statements. How these statements are run, and combined, will depend on the JOIN_BY_SQL parameter in the universe. By default, Web Intelligence will generate a separate SQL statement for each context, as well as each measure from a different table. It will then run those SQL statements in parallel, and merge the results after the data has been retrieved. If the JOIN_BY_SQL parameter is set to Yes, then Web Intelligence will generate a single SQL query, with multiple Selects, and a FULL OUTER JOIN between each Select. The results will then be combined by the database, and the final results will be returned to the document. This can result in a much smaller and more efficient dataset being returned to the document.
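To picture what the report server does when JOIN_BY_SQL is left at No, here is a minimal sketch (in Python, with made-up data) of merging two per-context result sets on their shared dimension, using full-outer-join semantics:

```python
# Two per-context result sets keyed by a shared dimension (Year here),
# as Web Intelligence would retrieve them with JOIN_BY_SQL = No.
# The values are illustrative, not from any real universe.
sales   = {2001: 1000.0, 2002: 1200.0}   # context 1: sales revenue by year
returns = {2002: 80.0, 2003: 95.0}       # context 2: returned amount by year

# Client-side merge: every year from either result set appears once,
# with None where a context had no row (full-outer-join semantics).
merged = {}
for year in sorted(set(sales) | set(returns)):
    merged[year] = (sales.get(year), returns.get(year))

# merged now holds one row per year, e.g. 2001 has sales but no returns
```

With JOIN_BY_SQL set to Yes, this same combination happens inside the database instead, so only the final merged rows travel back to the document.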

However, make sure that queries do not contain objects that are not required for the reports. This is especially true of measure objects, since these typically come from one context only. Removing one unneeded measure can remove one SQL statement from the database.

Rendering a Document

Once the query, or queries, complete, the document will merge the results of multiple queries, then perform any and all needed calculations before rendering the results. When a user goes from one report tab to another, the new tab will run its calculations again, and then, once complete, display that tab for the user.

Reports that contain many calculations, in the form of formulas or variables, may render very slowly, depending on the volume of data contained in the report. For example, a report that contains 10,000 rows of data, and 10 columns of formulas or variables, will need to perform 100,000 calculations before rendering the results.

Therefore, significant performance improvements can be gained by reducing the number of rows contained on the report, as well as reducing the number of calculations needed. As mentioned in the previous section, it is much better to do calculations in the universe, than in the document. If the calculations are done in the universe, then the database performs the calculations once, and delivers them to the document. However, if the calculations are done in the report, they will happen when the query is completed, as well as when a user clicks on a different tab. Of course, documents will always have calculations in them, but we want to reduce them as much as possible, especially when there is much data in the report.

Data Reduction

As mentioned, one of the most common reasons for poor report performance is too much data in the report. Often, the reason for too much data is the need for multiple levels of data for drilling. Each level of a hierarchy that is added to a document causes the result set to grow exponentially, along with the document size. However, when data sets are large, alternate techniques need to be employed to reduce the amount of data in the document.

Using OpenDocument: One such technique involves creating a high level document, with only the data needed to display the top level before drilling. Then, using the OpenDocument function, we can create a URL that, when clicked on, will drill into a second document, passing values into prompts in that document. For example, if the user clicks on a Region to drill into that region, the Region name, or ID, is passed to the second document, which is then refreshed for that Region only. This technique keeps the first document small and agile, and keeps the second document small as well. Note that OpenDocument can be used in conjunction with standard drilling. A document can be set up to drill one or two levels within the document, then further drilling can be enabled with the OpenDocument function.
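As a rough illustration, an OpenDocument link for the Region drill might be assembled like this. The server path, document CUID, and prompt name below are hypothetical, and the exact parameter names (iDocID, the lsS prefix, and so on) can vary by release, so check the OpenDocument guide for your version:

```python
from urllib.parse import urlencode

# Hypothetical server and document values; lsS<PromptName> passes a
# single value into the named prompt of the target document.
base = "http://boe-server:8080/BOE/OpenDocument/opendoc/openDocument.jsp"
params = {
    "sIDType": "CUID",
    "iDocID": "AbCdEfGh1234",   # CUID of the detail document (made up)
    "lsSRegion": "West",        # the Region value the user clicked on
}
url = base + "?" + urlencode(params)
# url now links to the detail document, refreshed for the West region only
```

In the report, this URL would be built in a formula cell, with the clicked Region value concatenated in place of the hard-coded "West".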

Using Query Drill: Another technique, which is similar to the OpenDocument technique, is Query Drill. The difference is that Query Drill operates entirely within one document. The document is built with only high level data. Then, when a user clicks on a value to drill into it, the query is rerun, with an additional query filter added, which will filter the results to those results needed for the drill. For example, if a user clicks on the West Region, the query is rerun, with a new query filter, where Region = ‘West’. This technique also keeps the document size much smaller.


There are many factors involved in improving the performance of a Web Intelligence document. The techniques presented here are only a few. There may be things that can be improved in the universe, or at the database, to improve performance. It’s important to optimize your reporting in any and all places available.

Jonathan Brown from SAP has also written an excellent paper on Web Intelligence performance here.

Display Input Control Values on Report

A common requirement in the reporting world is to display filtered values on a report. This could include Query Filters (including prompted values), Report Filters (including Drill Filters), or Block Filters. Of course, displaying Query Filters on the report is fairly simple. If the Query has one or more prompts, you would use the UserResponse() function to display the selected value(s) on the report. Otherwise, you could use a formula like this:


Or, of course, you could simply type the values into a cell. The same goes for Block Filters. Since they aren’t dynamic, you can simply type them into a cell.

However, there is no function to display the values selected in an Input Control. This question was recently posted on BOB: “How do I display the values selected in an Input Control on the report?” The solution was fairly simple, but sometimes the simple solutions can be the hardest to find.

To demonstrate this, I’ll use the eFashion universe. I started by creating the following query:

After running the query, I removed the Store name column from the block. This column will be used for the Input Control.

Next, I created an Input Control as follows:

  • Object: Store name
  • Type: Check boxes (Multiple)
  • Number of lines: 14

For all other settings, use the defaults. Now, as you select one or more Stores from the Input Control, the values in the block adjust accordingly. Of course, we can’t see on the report which value(s) are selected. So here’s how we accomplish this:

From the Templates tab of the Report Manager, expand Tables, and drag a Vertical table on to the report. This gives you a blank table, with one column. From the Data tab of the Report Manager, drag the Store name object into the center of one of the cells in the table. At this point, the table should be showing all Stores.

Next, click the Edit button on the Input Control.

In the Edit Input Control box, go to the Dependencies tab. You see Block 1 is selected. Select Block 2 as well.

Click OK. Now, try it out. As you select values in the Input Control, Block 1 continues to show the filtered data, while Block 2 shows the selected Stores.

There you go. Now all you have to do is format as needed, and you’re done. Let me know what you think.

Does Drilling Need to be Painful?

I never know where the inspiration might come from for a blog post. This one just dropped in my lap, so to speak. A co-worker had some questions about drilling in Web Intelligence, and what impact it would have on the database. Well, the answer was a little more complex than I expected. So that got me wondering if other people might be confused about the various drilling options. So this is all about drilling in Web Intelligence, and when to use which options.


Let’s start by looking at the drilling options in Web Intelligence, and what they mean. Since drilling works the same in both Rich Client and Web Client, I’ll be working in Rich Client for this discussion, but will mention the difference in work flows, when appropriate.

In Rich Client, when you go to Tools – Options, you will find that the Options dialog box has a Drilling tab. In the Web Client, you will go to InfoView preferences – Web Intelligence to find these options.

The first option allows you to decide what happens when you click the Drill button. By default, when you go into Drill mode, the current report will be placed into drill mode. However, if you select the other option, “Start drill on duplicate report”, then entering drill mode will cause the current report to be duplicated, with the duplicate report being placed into drill mode. In this method, the original report remains in Edit mode, rather than Drill mode.

Next, we have an option that says, “Prompt if drill requires additional data”. When drilling on a document, ideally, the data needed for drilling should already be in the document. But this isn’t always the case. You may need to drill beyond what is in the document. This is called drilling out of scope. More on this later. However, this option allows you to be prompted if your drill action will require the query to be run. Think of it as a warning system, letting you know that drilling may take longer than expected. If you choose this option, you will get a dialog box, when drilling out of scope, that allows you to select all the needed levels for additional drilling.

“Synchronize drill on report blocks” is an option that should be turned on by default, in my humble opinion. If you have two blocks on a report, such as a table and chart, that show the same data, this option allows you to drill down on both blocks with one click. If you don’t turn this option on, then one block will drill, and the other will be filtered for the drilled value.

If you’ve done any drilling in Web Intelligence, you’re probably aware of the Drill Filter Toolbar that appears above the report when drilling.

When you drill, combo boxes appear in this toolbar that allow you to drill sideways by selecting a different value for the drilled dimension. As the text in the toolbar indicates, you can also drag dimensions onto this toolbar from the Data tab of the Report Manager, and use them as simple report filters. The final option, “Hide Report Filter toolbar on startup”, will prevent this toolbar from appearing when you start drill mode.

Note that any changes you make to these options will take effect the next time you open a document.

Setting up Drill

In order to drill in a Web Intelligence document, Hierarchies (Drill Paths, in 4.x) need to be created in the universe, by the universe designer. This isn’t covered in this post. We’ll assume that the designer has already done this.

When building a query, we typically set up the drill path by defining the Scope of Analysis. This simply means selecting which objects from a Hierarchy we wish to use for the drill path. To do this, start by exposing the Scope of Analysis pane (Click the 3rd button on the toolbar). Then switch the Universe pane from “Display by objects” to “Display by hierarchies”.

Note that it is not necessary to Display by hierarchies. This view is helpful to see the hierarchies that are built in the universe. However, if you are familiar with the hierarchies, you can leave “Display by objects” selected.

You can now drag the objects from the Universe pane to the Scope of Analysis pane that you wish to use for drilling. These objects will be included in the query results, but will not be shown on the report until you start drilling to them. You don’t need to select all the objects from a hierarchy, only the ones you need for drilling.

In the query above, let’s say that we selected Year, Quarter and Month for the Scope of Analysis. On the report, we can drill from Year to Quarter, and Quarter to Month, using the query results stored in the document. However, if we want to drill from Month to Week, we will be drilling beyond the Scope of Analysis defined in the query. This is called drilling out of scope, and requires the query to be rerun. Drilling out of scope is generally not desirable, as it adds additional overhead to the database, and takes longer to drill. Therefore, it’s best to define the needed levels in the Scope of Analysis.

However, it’s also not good to define too many levels in the Scope of Analysis, as each additional level increases the size of the result set stored in the document, exponentially. This can cause the document to open slowly, import and export slowly, and perform sluggishly, especially when refreshing the query(ies). The bottom line is that you want to find the balance between too many levels and too few levels.

Query Drill

As mentioned, for each additional level added to the Scope of Analysis, the size of the result set grows exponentially. This can cause documents to bloat to huge sizes. Query Drill is designed to alleviate this problem. However, it can also be a dangerous feature. Here’s how it works.

Rather than defining a Scope of Analysis, you would simply create your query as usual, and run the query. Once the query has run, open the Document Properties (File – Properties). Under Document options, you will see a check box labeled “Use query drill”. Click the check box to activate Query drill.

Now, when you drill, for example, on 2003 in the Year column, Web Intelligence will rerun the query, adding a query filter for Year=2003. If you then drill on Q4, the query will be run again, with another query filter added for Quarter=Q4. This keeps the result set nice and small, but allows drilling to all levels needed.
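Conceptually, Query Drill just appends a query filter for each drill action and reruns the SQL. Here is a simplified sketch of that behavior; the table and column names are made up:

```python
# Each drill action adds a (column, value) filter, and the query is rebuilt
# and rerun. This mimics Query Drill's rerun-with-filter behavior.
def build_sql(filters):
    sql = "SELECT quarter, SUM(revenue) FROM sales"
    if filters:
        sql += " WHERE " + " AND ".join(f"{col} = '{val}'" for col, val in filters)
    return sql + " GROUP BY quarter"

drill_filters = []

drill_filters.append(("year", "2003"))    # user drills on 2003
sql_1 = build_sql(drill_filters)          # query reruns, filtered to 2003

drill_filters.append(("quarter", "Q4"))   # user then drills on Q4
sql_2 = build_sql(drill_filters)          # query reruns, filtered to 2003 Q4
```

Each rebuilt query returns only the rows needed for the current drill level, which is why the document stays small, and also why every drill action costs another database hit.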

So why is this a dangerous feature? First of all, every time a user drills down, up, or sideways, the database is hit with another query. This can add tremendous overhead to the database, especially if there are many users using this feature.

Second, depending on how long it takes for the query to run, users will have to wait for the results of their drilling action. This can cause users to become annoyed with drilling, and BusinessObjects in general. Therefore, Query Drill should be used with extreme caution.


Yes, there’s some administration to be done with drilling. So, if you’re not an administrator, you can skip this section, although you may find it interesting.

There are some security settings that can be used to control the way users use drill. In the CMC, under Applications – Web Intelligence, there are some advanced settings that apply to drilling. The first one is “Enable drill mode”. With this setting, you can completely eliminate a user’s ability to drill. If you don’t want certain users to use drilling, use this setting to deny drilling.

“Extend scope of analysis” allows a person to drill out of scope. Since drilling out of scope adds overhead to the database, you can limit users’ ability to drill out of scope by denying this right.

So, with these two settings, you can have some control over drilling. Use these cautiously, as drilling is one of the most useful features of Web Intelligence.


The questions from my co-worker had to do with the impact of drilling on the database. Of course, my answer was along the lines of “It depends”. There are numerous factors that can affect the impact on the database. Hopefully, this article gives you the information you need to make an educated decision regarding the options for drilling. Let me know if you have any special tricks or guidelines for drilling.

Adding a Linear Trend Line to a Chart

I must admit, I have long lamented that Web Intelligence doesn’t have built-in trend lines for charts. Of course, that doesn’t mean you can’t have one. You just have to build it manually, and that can be a lot of work. So this post is about how to do a linear trend line, also known as a Linear Regression. This technique makes extensive use of Calculation Contexts.

We’ll use the eFashion universe for this example, so you can follow along with all the steps. Start by creating a query with Store name, Year, Month name, and Sales revenue. Run the query, and remove Store name and Year from the block, and put a custom sort on Month name to get the months in chronological order. Finally, add report filters for Store name and Year to the Report Filter toolbar. Your report should now look like this.

OK, let’s get started. We’ll need to create quite a few variables along the way.

1. Create a variable called “n”. This will simply calculate the number of values in the block. So the formula is as follows:

=Count([Month Name]) In Block

Add this new variable to the block, and verify that it shows 12 on every row.

2. The next variable simply numbers the rows in the block, so we’ll call it “x” as it numbers the values along the X axis of the chart. The formula for this one is:

=Sum(RunningCount([Month Name]))

If you add [x] to the block, it should number the rows, 1 – 12.

3. Now we get into the heavy calculations. Call the next variable “a” with the following formula:

=(Sum([x] * [Sales revenue]) In Block * [n] In Block) - (Sum([x]) In Block * Sum([Sales revenue]) In Block)

You can add the [a] variable to the block, just to verify that you get the same number on each row. Notice the “In Block” context used several times in this formula. In Block is an output context operator that tells the formula to output one number for the entire block. This is why we get the same value on each row.

4. Moving along in order, let’s call the next variable “b”, with the following formula:

=([n] * [n] * (Sum([x] * [x]) In Block / [n] - (Average([x]) In Block * Average([x]) In Block)))

This formula squares [n], then multiplies it by the average of the squared [x] values minus the square of the average [x] value (in other words, the variance of [x]). Algebraically, the whole thing simplifies to [n] * Sum([x] * [x]) - Sum([x]) * Sum([x]), which is the denominator of the least-squares slope. Sound confusing? Trust me, we’re on the right path, but we’re not done yet.

5. OK, this brings us to the next variable, which we’ll call “c”. This is the slope of the trend line, and it’s pretty simple:

=[a] / [b]
Of course, since both [a] and [b] return one value for all rows, we can expect [c] to return one value for all rows.

6. We’re in the home stretch now. Add another variable, called “d”, with the following formula:

=(Average([Sales revenue]) – [c] * Average([x])) In Block

Again, this one should return one value for all rows. Any time you see “In Block” as the output context for a block, you can expect it to return one value for all rows.

7. OK, one final variable! We’ll call this last variable “Trend”. This one has the following formula:

=[d] + [c] * [x]
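Taken together, these seven variables are just the standard least-squares slope and intercept. If you want to sanity-check the formulas outside of Webi, here is the same arithmetic in Python, with made-up revenue values standing in for [Sales revenue]:

```python
# Sanity check of the Webi trend-line variables in plain Python.
# The revenue values are illustrative, not eFashion data.
revenue = [100.0, 120.0, 90.0, 140.0, 160.0, 150.0,
           170.0, 165.0, 180.0, 175.0, 190.0, 210.0]

n = len(revenue)              # variable [n]: Count([Month Name]) In Block
x = list(range(1, n + 1))     # variable [x]: running row number, 1..12

# [a] = n * Sum(x*y) - Sum(x) * Sum(y)
a = n * sum(xi * yi for xi, yi in zip(x, revenue)) - sum(x) * sum(revenue)

# [b] = n^2 * (Sum(x^2)/n - Average(x)^2), i.e. n*Sum(x^2) - Sum(x)^2
avg_x = sum(x) / n
b = n * n * (sum(xi * xi for xi in x) / n - avg_x * avg_x)

c = a / b                             # [c]: the slope
d = sum(revenue) / n - c * avg_x      # [d]: Average(y) - c * Average(x)

trend = [d + c * xi for xi in x]      # [Trend]: one point per month
```

A handy property for checking your work: a least-squares line always preserves the total, so the sum of the [Trend] values should equal the sum of the revenue values.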

That’s all the variables we need. We are now ready to plot this on a chart. So, remove everything from the block except the Month name, Sales revenue, and Trend objects. Finally, turn the block into a line chart, and format as desired. You should end up with something like this:

As you select different years, or stores, from the filters above, the trend line will adjust accordingly.

Give it a shot with your charts and let me know what you think.

Removing the Confusion from Calculation Contexts

If you’ve read enough of my blog posts, you’re probably aware that I’ve been using BusinessObjects for a long time, and, during the early years, I was pretty clueless. One of the topics that had me completely stumped was Calculation Contexts. I attended the classes, and learned a lot, but could not grasp the concept of Calculation Contexts. So, I studied the training manuals, but it just wasn’t sinking in. So, I studied the user guides, but it still wasn’t making sense. So, I decided that I just wasn’t smart enough to understand this concept, and did my best to live without it.

It wasn’t until 2002, while attending the BusinessObjects User Conference that I finally got it. I was attending a session by my good friend Dave Rathbun. One of his topics was Calculation Contexts. Dave explained them in a slightly different way than the books. Suddenly the light bulb went on, and it hasn’t dimmed since. In 2004, I started teaching BusinessObjects classes, and I always taught Calculation Contexts the way I learned it, and it seemed to make more sense than what was in the books. So, my first piece of advice is, if you really want to understand Calculation Contexts, don’t read the books. 🙂

What is Calculation Context?

The bottom line is that all calculations in Web Intelligence, by default, happen in the context of where they appear on a report. The same formula, in different places, can give different results. Often, the default context is exactly what we want. But sometimes it isn’t, and it’s then that we need to understand how to manipulate the contexts.

Default Calculation Context

Let’s start with a simple example, using the infamous eFashion universe. Create and run the following query:

Next, section the report on Store name. Add a Sum to the Sales revenue column. Notice that, even though the formula for the Sum is the same in every section, it yields different results in each section:

=Sum([Sales revenue])

This is because the calculation takes place in the context of where it appears on the report. In this case, the formula is within a section, so it outputs a different value for each section.

Next, drag the cell with the sum, out of the block, and drop it next to the section header cell.

Notice that the results are the same as when the sum was in the block. This is because the formula is still in the context of the section, Store name, and therefore, outputs a different value for each Store name.

Next, drag the cell with the sum up and to the left, next to the Report Title cell, but hold down the Ctrl key before you drop it. This creates a copy of the cell.

This time, after dropping the cell, you will see a different number. The number you see now is the grand total for the whole report. Once you move the formula outside the context of a section, it returns results in the context of the whole report.

But what if you want to calculate the percentage that each store makes up of the grand total? You would need the following calculation: (Revenue for each store) / (Grand Total Revenue)

But how do you get the grand total to appear in the section? Well, before we answer that question, let’s talk about Calculation Contexts, and what makes them up.

Components of Calculation Contexts

Every calculation in Web Intelligence has an Input and an Output context.

  • Input Context: Determines what values are taken into the calculation. Think of it as, “I need to take into the calculation, one number for every __________?”
  • Output Context: Determines the output level, or dimensions, by which the calculation is output. Think of it as, “I need this formula to put out one number for every __________?”

The syntax for Calculation Context is as follows:

=Aggregate Function([Measure] Input Context) Output Context

Notice that the Input Context is inside the aggregate function, and the Output Context is outside the aggregate function. If you don’t specify an Input or Output Context, Web Intelligence will use the defaults. More on defaults later. By the way, while Calculation Contexts are most often used with aggregate functions, they are sometimes used without functions.

How to use Calculation Contexts

Let’s try a couple examples to demonstrate how these contexts work. Add another report to the document that we have already started, and place a block on it with Store name and Sales revenue.

Select the Sales revenue column, and add an Average calculation. In the footer of the block, you should see the following:

Next, we’re going to put an Alerter on the Sales revenue column, to highlight all values that fall below average. In order to reference the average of 2,799,016 from an Alerter, we will need to turn the Average into a Variable. So, select the cell that has the Average value in it, and click the Create Variable button on the Formula Toolbar.

Give the Variable the name, Average. Now we can create the Alerter. Select the Sales revenue column, and then click the Alerters button on the Reporting Toolbar.

In the Alerters box, click the New button, to create a new Alerter. Name the Alerter, Below Average. In the Sub-Alerter section, set the following conditions and formats:

Click OK, then OK in the Alerters box. At this point, the Alerter is applied to the Sales revenue column. But we don’t see any values highlighted in bold red. Why not? Well, it’s all about Calculation Contexts.

To see why this isn’t working, add the Average variable to the block, as the third column.

The value that we would expect to see, 2,799,016, isn’t showing up. Instead we’re seeing the same values as we see in the Sales revenue column. So let’s explore why.

As I mentioned, if we don’t specify an Input or Output Context, Web Intelligence will use the default contexts. And in this case, we didn’t specify any contexts in the Average variable. So what is the default? Well, in the body of a block, the default Input Context is the dimension(s) displayed in the block. And, the default Output Context is the dimension(s) displayed in the block.

If you were going to calculate the average manually, you would need to know what the numbers are for each store, sum them up, then divide by the count of stores. In other words, you would need to take into the calculation one number for each store, and you would want to output one number for the whole block of stores.

Well, that’s exactly what we need the Calculation Contexts to do. Since the default Input Context is the dimension(s) displayed in the block, and, in our case, the dimension displayed in the block is Store name, the default Input Context is to take in one value for each Store name. So, the default Input Context is fine.

So the problem must be the Output Context. Again, the default Output Context is the dimension(s) displayed in the block. In our case, therefore, the default Output Context is to output one number for each Store name.

So, what is happening here is that we’re taking into the calculation one number for each Store, and outputting one number for each Store. Clearly, that isn’t very useful. What we really want is to take in one number for each Store (the default), and output one number for the entire block (not the default). So, we need to specify the correct Output context.

So, let’s open up the Average variable, so we can modify the Output Context. Click at the end of the formula, outside the aggregate function, and enter the following:

In Block

So the entire formula should now be:

=Average([Sales revenue]) In Block

The “In Block” part of the formula tells the formula to output one number for the entire block. Click OK, and confirm that you wish to modify the variable. Now, not only is the Average variable giving us the same value on every row, but the Alerter has now started working.

OK, so now that we have this working correctly, we can remove the Average column from the block. Our next step is to add another variable to the block. This time, we want to display the most revenue earned by each store in any one year. So, create another measure variable, called Maximum, with the following formula:

=Max([Sales revenue])

Add this new variable to the block as the third column. Does it look familiar? Just like the Average variable, Maximum is just repeating the Sales revenue values. This can’t possibly be correct! You’re right. It’s not correct. So let’s figure out why.

Since we didn’t specify an Input or Output Context, the formula is using the defaults. Therefore, it is taking in one number for each Store name, and outputting one number for each Store name. What do we want it to do? Well, in order to know the most revenue each Store earned in any one Year, we need to know what the values are for each Store, for each Year. In other words, we need to take into the calculation both Store name and Year, not just Store name. And, we want to output one number for each Store name, which is the default, so we don’t need to specify an Output Context.

So, open up the Maximum variable, and click just inside the closing parenthesis, so we can specify the Input Context. Add the following:

In([Store name];[Year])

So, the whole formula should now look like this:

=Max([Sales revenue]In([Store name];[Year]))

Notice that this time, we put parentheses around the objects used in the context. When you use a keyword, such as Block or Report, as the Context, you don’t put parentheses around it. But when you use objects, like Store name or Year, you always put parentheses around them.

Click OK, and confirm that you want to modify this variable. Now, you see very different values.

Are these numbers correct? Well, if you want to validate the results, add another block to the right with Store name, Year, and Sales revenue, and see if the Maximum variable is showing the highest annual revenue for each Store. You will find that, yes, the results are correct.
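If it helps to see the idea outside of Webi, here is the Maximum variable expressed in Python, with made-up data: aggregate at the Input Context level (Store name and Year), then output one number per Store name:

```python
from collections import defaultdict

# Illustrative (store, year, revenue) rows, not real eFashion data.
rows = [("e-Fashion NY", 2001, 100.0), ("e-Fashion NY", 2002, 150.0),
        ("e-Fashion LA", 2001, 120.0), ("e-Fashion LA", 2002, 110.0)]

# Input Context In([Store name];[Year]): one revenue value per (store, year)
per_store_year = defaultdict(float)
for store, year, rev in rows:
    per_store_year[(store, year)] += rev

# Output Context (the block default): one number per store,
# taking the Max over that store's yearly totals
max_per_store = defaultdict(lambda: float("-inf"))
for (store, year), total in per_store_year.items():
    max_per_store[store] = max(max_per_store[store], total)
```

The first loop is the Input Context (what goes into the calculation), and the second is the Output Context (one result per Store name), which is exactly the split that =Max([Sales revenue] In([Store name];[Year])) expresses in one formula.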

Now, let’s take a brief look at the first report that we sectioned on Store name. If we want to display the percentage that each Store makes up of the total, we would use the following formula within the section:

=Sum([Sales revenue]) / Sum([Sales revenue]) In Report

The second part of the formula, with “In Report” as the Output Context, specifies that we want it to output one number for the whole report. Therefore, we are dividing the Sales revenue for each section, by the Sales revenue for the whole report. Format the cell as a percentage, and you’re done.

Using Other Context Keywords

So far, we have explored the Input and Output context using the “In” keyword. But there is more than one keyword that can be used in Contexts. Let’s take a quick look at ForEach and ForAll.

ForEach can be used to add dimensions to the default context. For example, in the Maximum variable we just created, we specified both Store name and Year for the Input Context. If we had used “ForEach” instead of “In”, we would not have needed to specify Store name in the Context, as Store name is the default. Test this by creating another variable, called Max ForEach. Enter the following formula:

=Max([Sales revenue]ForEach([Year]))

Go back to the second report in our document, and add the new variable to the block as the fourth column. Notice that it gives the same results as Maximum. Use ForEach when you need the formula to be dynamic. If you add additional dimensions to the block, they will automatically be added to the Input Context.

ForAll is the opposite of ForEach. While ForEach adds dimensions to the default context, ForAll removes dimensions from the default Context.

OK, so let’s take a quick look at one more Context keyword: Where

The Where keyword allows you to add a filter to a formula. For example, if we wanted to split the revenue for each year into separate columns, we could create a variable for each year, using the Where keyword. Let’s try one.

Create a variable called Revenue 2001 as a measure, with the following formula:

=Sum([Sales revenue]) Where([Year] = "2001")

Add this variable to the existing block. In this way, you can easily filter a formula for one or more values of a dimension. Of course, this could also allow you to calculate the variance between two years. For example, if you wanted the variance between the 2002 revenue and the 2001 revenue, you would use the following formula:

=Sum([Sales revenue]) Where([Year] = "2002") - Sum([Sales revenue]) Where([Year] = "2001")

There are a couple rules when using the Where keyword:

  • Following the Where keyword, always use parentheses around the condition.
  • In the condition, the value to the right of the equal sign must always be a constant, not a variable.
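In effect, Where is a conditional aggregate. Here is the 2002-minus-2001 variance expressed in Python, with made-up rows:

```python
# Illustrative (year, revenue) rows, not real eFashion data.
rows = [("2001", 100.0), ("2001", 50.0), ("2002", 120.0), ("2002", 60.0)]

# Sum([Sales revenue]) Where([Year]="2001") is a sum over filtered rows
rev_2001 = sum(rev for year, rev in rows if year == "2001")
rev_2002 = sum(rev for year, rev in rows if year == "2002")

variance = rev_2002 - rev_2001    # the year-over-year variance
```

Each Where clause filters the rows going into its own aggregate, without filtering the block itself, which is what makes the side-by-side year columns possible.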


This post turned out to be much longer than I expected when I started it. I guess I got carried away. But I think that it’s very important to understand Calculation Contexts. Just remember, the Input Context says, “I need to take into the calculation, one number for every what?” And the Output Context says, “I want to put out one number for every what?” Of course, the more you practice it, the easier it will get over time. So, practice much, and let me know how it goes.