Impact of a Database Migration

For the past few years I’ve been involved in quite a few “forklift” projects. What is a forklift project? I’m glad you asked. When a company decides to move their data warehouse from one database platform to another by simply moving, or copying, all the existing tables into the new database, this is called a forklift. The tables are simply lifted from one database, moved to the new database, and set down there. The data model isn’t changed. That’s a whole other project. But oftentimes, some things do change in a forklift project. Data types may be different between two databases. Or one database may have different reserved words than another, causing table names, or column names, to be changed. But, for the most part, everything stays the same.

My role in the project is to convert the BusinessObjects universe(s) and reports to the new database platform. I’ve done this enough times that I’ve developed some standard templates to help me make sure that I don’t miss anything. I don’t want to get to UAT (User Acceptance Testing) and find out something doesn’t work that I forgot to check.

So, what happens in a Forklift project? Well, from a BusinessObjects perspective, we would prefer that the database structure be complete before we start development. This includes the data access layer, if needed. It would be nice to have some data in the tables during development, but we can get started without data.


Sometimes we like to treat documentation like it’s a bad word. After all, it’s so much faster to just do the work, and skip the documentation. But trust me on this: In the long run, documentation will be absolutely necessary. You need to know where you were, and what you’ve done, in case you ever need to revert, or fix, anything in a universe or report.

I always start with a workbook, with a spreadsheet for each part of the universe. I create one workbook for each universe that will be converted. I have a template that I use for each project. The workbook has eight sheets in it, as follows:

  • Parameters: It’s not uncommon to change universe parameters, in order to optimize the universe for the new database. On this tab, we document each parameter that is changed.
  • Tables: Most of the time, the database/owner name will change for each table. And sometimes, table names will change. On this tab, we document the old database/table names, and the new database/table names.
  • Derived Tables: The SQL in each Derived Table will need to be converted from the old database to the new database. On this tab, we document the old SQL and the new SQL. Note that, for very large, or complex, Derived Tables, we may have to use a separate application, such as Microsoft Word, to document the old and new SQL.
  • Objects: On this tab, we document the old and new SQL for each object that needs to have the SQL changed. We don’t document objects that don’t need to be changed.
  • Joins: Any joins that have database functions may need to be modified to work with the new database. On this tab, we document the old and new SQL only for the joins that are changed.
  • Parameters (UNX): This tab is used only with UNX universes, to document any changes that need to be made to query parameters, if any.
  • Lists of Values: If there are Lists of Values (LOVs) with custom SQL in them, they may also need to be converted. On this tab, I only document the LOVs that have been modified.
  • Other: This is just a tab to document any changes that don’t fit into the other tabs. It might include things like connection and/or universe name changes.

I can’t emphasize enough the importance of documentation. When you start testing reports, if the values don’t match from the old universe to the new universe, you may need to re-convert the SQL from old to new. Having this documentation will make it easier to check your work. Also, once you have converted snippets of code, you may be able to reuse them with the next universe, without having to convert them again.

Converting the Universe

OK, so you have the documentation ready, so now it’s time to convert the first universe. It would be best if you can freeze development on the universe at this time, so you’re not trying to convert a moving target. A very simple universe can be converted within an hour. A very complex universe may take a week or two to convert. Be prepared, and set expectations accordingly.

I also like to set up a temporary sandbox environment where all universes and reports will be converted. If this isn’t possible, it can also be done in the Development environment. Before you start the steps below, make sure you create a connection to the new database, and make sure it works. Make sure that the account that you are using to connect to the database has read permission for all the database objects that will be needed for the universe.

Following are the steps that I use to convert a universe:

  1. In the Sandbox/Development environment, create a copy of the universe. Give it a new name that reflects the target database.
  2. Open the new universe in Universe Design Tool (UDT) or Information Design Tool (IDT), depending on the type of universe.
  3. Change the connection for this universe to the connection for the new database. Make sure that you test the connection.
  4. Change any parameters that need to be changed, in order to optimize the universe for the new database. Document your changes.
  5. For each table in the universe, change the owner and/or table name, as needed.
  6. Go through each Derived Table, and update the SQL for the new database. If needed, get help from a DBA, so the SQL is written for optimal performance for the database.
  7. Run an integrity check, parsing objects, joins, and conditions. This will tell you what objects may need to be updated with new SQL.
  8. Go through each object, join, and condition, that produced an error in the integrity check. Update the SQL, as needed. Again, make sure that the SQL you write is optimized for the database.
  9. Look for LOVs that might have custom SQL, and modify as needed. If an LOV is marked “Export with universe”, it may have custom SQL.
  10. In a UNX universe, look for parameters that may need to be modified, and fix, as needed.
  11. Save and export the universe.
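As an illustration of steps 6 and 8, here is what a simple derived-table conversion might look like, assuming a forklift from Oracle to SQL Server. The owner, table, and column names below are hypothetical:

```sql
-- Original derived table (Oracle); names are hypothetical
SELECT order_id,
       NVL(discount_amt, 0)           AS discount_amt,
       TO_CHAR(order_date, 'YYYY-MM') AS order_month
FROM   sales.orders
WHERE  order_date >= ADD_MONTHS(SYSDATE, -12)

-- Converted derived table (SQL Server)
SELECT order_id,
       COALESCE(discount_amt, 0)      AS discount_amt,
       FORMAT(order_date, 'yyyy-MM')  AS order_month
FROM   dw_sales.dbo.orders
WHERE  order_date >= DATEADD(month, -12, GETDATE())
```

Both versions, old and new, go into the Derived Tables tab of the workbook, so the conversion can be checked, and reused, later.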

Reports that are associated with this universe will need to be converted from the old universe to this new universe. Once you have done that, test the reports to make sure they run successfully. If they throw errors, work to find out why, and fix those issues. It’s a good idea to test the same reports against the old and new universes. The data may not match at this point, depending on what database you are connected to (Development or Production), and the status of loading data into the new tables. I’ve had times when I tested a report against the new universe, only to get errors, then later found out that the same report got the same errors against the old universe. In that case, just leave it alone.

Converting Reports

Do you have reports with custom SQL in the queries? If so, you will need to convert those queries manually to the new database. Just like you did with the Derived Tables in the universe, you will want to get help from a DBA to convert the SQL, making sure that it’s optimized for the database. Large, complex SQL queries may be tricky to convert, so take your time, and make sure that it is converted correctly. If the query is very complex, I like to break it down into small pieces, and convert each piece, one at a time.


Once you have all the universes and reports converted to the new database, you’re ready to start User Acceptance Testing (UAT). The primary goal of UAT is to make sure that reports return the same results against the old and new databases. If the results are different, don’t assume that the problem is with the universe. Make sure that you check the database, as well, making sure that the data matches between the old and new databases. If it does, then check the universe, and make sure everything used for that report was converted correctly. Make sure that you update your documentation each time you make a correction in the universe.

You’re all done. It’s time to relax in the sun with your favorite drink. That is, until the next project comes along.


Differences Between Desktop Intelligence and Web Intelligence

What are all the differences between Desktop Intelligence (Deski) and Web Intelligence (Webi)? The question has been asked so many times, including from clients, that I decided to attempt to put together a comprehensive list of the differences. Why is such a list important? Well, with the release of SAP BusinessObjects 4.0 in September 2011, Desktop Intelligence is no longer available, and is now completely replaced by Web Intelligence. So, for organizations that use Desktop Intelligence, it’s important to know what those differences are, as they transition from Deski to Webi.

The differences can be divided into two categories:

  • Deski features that are missing from Webi
  • Webi features that never existed in Deski

Fortunately, that first category is getting shorter with each new release of Webi. However, since that category is most important to those migrating from Deski to Webi, we’ll cover that category first.

Deski Features that are Missing from Webi

  • Display a table in multiple columns: In Deski, you can set a table so that, when it reaches the bottom of a page, it “wraps” to the next column on the same page, similar to how a newspaper column continues on the same page.
  • Grouping button: Deski has the ability to create grouped variables with the click of a button. In Webi, we have to write the If – Then – Else logic manually to create such functionality. This can be quite tedious. I have submitted an idea to SAP to have this functionality added to Webi. You can vote for this idea here.
  • Freehand SQL queries: I’ve never been a big fan of this, as it defeats the whole purpose of having universes, and also creates maintenance nightmares. But there is some value for it in a test environment. This idea has been submitted to SAP here, and it is currently under review.
  • Save as RTF, TXT, HTML: Currently, Webi can save as Excel, PDF, or Webi only.
  • Edit List of Values: In Deski, users could edit a List of Values for a universe object, so that it would only display the values that are relevant to them.
  • List of values in formula editor: In Deski, when creating a formula, you can select values for a particular object, from a list, provided in the formula editor. This idea has been accepted by SAP, and should appear in a future version of Webi.
  • Center on page: Deski has a Page Layout option to center a block, or cell, on the page, between the margins.
  • VBA Macros: Deski supports Visual Basic macros. I don’t know if Webi ever will support this, but I suspect it will support some kind of programming language in the future.
  • Delete multiple tabs simultaneously: This seems like such a simple thing, but Webi still doesn’t have this capability.
  • Export multiple documents simultaneously: That’s right. In Webi, we can only export documents to the repository one at a time.
  • Create a variable in its current context: In Deski, we could create a variable from a formula, and lock in the current context of the formula, even if it was not explicitly included in the formula.
  • Support for all installed fonts: Webi uses its own font library, not the Windows fonts. So, unlike Deski, we can’t add additional fonts to Webi. An idea has been submitted for this here.
  • Schedule to a printer: I realize that printing is becoming more and more obsolete, but it isn’t there yet. Deski reports could be scheduled to print automatically. Webi reports cannot.
  • Automatic custom sort for month names and weekday names: Deski has built in custom sorts for Months and Weekdays.

So there you have it. As far as I know, that’s a complete list of features that Webi is still missing from Deski. If you know of more, please mention them in the comments below.

Webi Features that never existed in Deski

  • Track Data Changes: Allows you to see the difference between two different refreshes of a document, and compare the results.
  • Hyperlinking Wizard: You can create an OpenDocument link in about 5 minutes, as opposed to the days or weeks spent doing this in Deski.
  • More built-in formulas: Webi has more built-in formulas than Deski. These formulas come in the form of standalone cells that you can simply drag onto your report.
  • Save as Excel with 65K+ rows: You can save a Webi document to the Excel 2007 format, which allows for over 1 million rows in a single worksheet.
  • Add temporary values to custom sort: When creating a custom sort, you can add values that are missing, to the sort, so that, when they eventually show up on the report, they will sort in the correct order.
  • Box chart: A new chart in Webi that allows you to see a measure at five different data points across a dimension.
  • Tag cloud chart: A new chart in Webi that allows you to easily compare two measures against a single dimension.
  • Heat map chart: A new chart in Webi. It’s the equivalent of a crosstab table, but in chart format.
  • Bubble chart: A new chart in Webi that compares two measures against a single dimension, in an easy to read format.
  • Optional Prompts: Prompts created in the query can be marked as optional, allowing users to ignore the prompts, if they wish.
  • Display query summary on report: Webi has a QuerySummary function that allows you to easily display information about the query on the report.
  • Inner join between queries: Webi’s Merge Dimension feature allows you to choose how the data is displayed on the report. It doesn’t have to be a full outer join, like in Deski.
  • Query Drill: Allows you to keep your data set small, while drilling through large amounts of data.
  • Drill in InfoView: Webi document, of course, can enter drill mode in InfoView (BI Launchpad). Deski documents cannot.
  • Edit in InfoView: Webi documents can be easily edited in InfoView (BI Launchpad), by anyone who has permission.
  • Alternate row shading: We used to have to do some sleight-of-hand with variables to get this to work in Deski. But Webi has it built in.
  • Input Controls: This allows for user interaction in the reports, making what-if analysis quite simple. It also makes it easy to build powerful dashboards in Webi.
  • Advanced formatting rules: That’s the new name for Alerters. Webi’s Formatting Rules editor allows for more powerful options, including using formulas in the rule.
  • Publish as a BI Service: Individual blocks in Webi can be published as a Web Service that can be consumed by other BI applications.
  • Compatible with Live Office: Report parts from Webi documents can be consumed by Live Office, allowing integration with MS Office.
  • Compatible with BI Widgets: Webi report parts can be consumed by BI Widgets, allowing them to be dropped onto the desktop as standalone objects.
  • BEx queries: Webi can run queries directly against BEx queries.
  • Query stripping: Allows for the automatic removal of objects from a query against an OLAP cube, that do not directly contribute to the report(s).
  • Synchronize drill on report blocks: When drilling on one report block, other blocks on the same report can drill at the same time, staying in sync.

As far as I know, that’s everything that Webi offers over and above what Deski had. If you know of any others, feel free to mention them in the comments, below.


So there you have it. By my count, Webi wins, 24 to 14. While there are still some features missing from Webi, there aren’t that many, and hopefully, none of them will be show stoppers for you.

Impressions of Web Intelligence 4.0

SAP BusinessObjects Web Intelligence 4.0 has been Generally Available since September, 2011, and has generated a significant amount of buzz in the BusinessObjects community. I have no plans to create more buzz. My only desire here is to give an honest assessment of the new version. And by honest, I mean I will share what I have seen, both good, and bad.

Ribbons are for Packages

With this version, SAP has gone the way of Microsoft, replacing toolbars with ribbons. I’ve been using the ribbons in MS Office for several years now, and I still prefer the old menus and toolbars. The same is true for Webi. I don’t like the ribbons, for several reasons:

  • I have to remember which ribbon to select, for the feature that I need. If I don’t remember, then I end up hunting through ribbons. This wastes time. I don’t like to waste time. I can easily remember which ribbon to select for the commonly used features. But for the less used ones, I rarely remember which ribbon to choose.
  • Operations that used to take one click (simply click the button) now take two clicks. Click the appropriate ribbon, then click the appropriate button.

I’m sorry, but ribbons don’t make me more productive. They only slow me down.

Charts and More Charts

Web Intelligence now shares a common chart library with other tools. That was a smart decision. It creates a common look and feel across the various tools. But it also includes a host of new charts that we’ve never had before. Here are a few examples.

Box Chart:

The box chart shows the following five different values for a pair of dimensions, with a measure:

  • Minimum
  • First quartile
  • Median
  • Third quartile
  • Maximum

Users can mouse over the chart to see the values.

Column chart:

Yes, we’ve always had column charts, but now we can make the columns round, and we have many more powerful options for these charts.

Tag Cloud Chart:

The Tag Cloud chart allows the comparison of two different measures against a dimension.

Heat Map Chart:

The Heat Map chart is the charting equivalent to a crosstab table. It displays the intersection of two different dimensions with a measure.

Bubble Chart:

The Bubble Chart allows for the comparison of two measures against a common dimension.

Some of the new charts look quite useful. But, in addition to the new charts, we have a host of new features for the charts. There are far too many to mention. But, to name a few, you can base the chart title on a formula. You can also use gradient colors on charts.

Unfortunately, we’ve lost the ability to create custom color palettes. I’m disappointed with that.

Improved Compatibility with MS Excel

We now have the ability to save a Webi document to the new .xlsx format. When we do, we can save over a million rows of data to a single worksheet. Of course, we still have the ability to save in the old format (.xls) as well. In that case, if you have more than 65,536 rows, the data gets split into multiple worksheets. So now you have a choice as to which format to choose.

What Happened to Tracking?

I don’t know how this was missed in UAT, but the Data Tracking feature, which was introduced in XI 3.x, is broken in 4.0. Here’s the issue: if you use Data Tracking, the only rows that will be formatted by the Data Tracking feature are the rows that are also formatted by the Alternate Row Shading feature. Seriously.

So, if you use Data Tracking, here’s the trick: set your Alternate Row Shading frequency to 1. Then Data Tracking works as expected. Yes, SAP is aware of this bug, and plans to issue a fix in SP04, which is currently scheduled to be out in July 2012. It is being tracked as ADAPT01618287.

What’s in a Name?

Some things have been renamed in Webi 4.0. Some of the new names make sense, others do not. The problem, of course, is that, with the new Ribbons, buttons have been moved around, and features have been renamed. This makes it that much more difficult to find what you need.

Alerters have been renamed to Formatting Rules. That makes sense.

The three axes of a 3D chart have also been renamed:

  • X Axis is now called Category Axis (This makes sense)
  • Y Axis is now called Value Axis (This one makes sense, too)
  • Z Axis is now called Region Color (Huh? What were they thinking?)

Since the rest of the world still refers to these as the X, Y, and Z axes, I have no idea why SAP decided to go against the grain.

The Data Manager is a Welcome Addition

One of the complaints from people moving from Desktop Intelligence to Web Intelligence has been the lack of a Data Manager. In Deski, we could always see the raw data, and information about the data in the Data Manager. Now Web Intelligence has the same feature. It’s a big improvement. The only thing missing, that Deski had, is the ability to see metadata for multiple past refreshes.

Hide and Go Seek?

For a long time now, I’ve missed some of the hide capabilities in Deski. They’ve now been added to Webi. 🙂

We can now hide a block, based on a formula. If the formula evaluates to true, the block is hidden. Otherwise, the block is shown.

We also now have the ability to hide columns in a block. Simply right-click in a column, and hide it. So simple. In fact, it’s easier than Deski. 🙂

Don’t Have a Fit

Fit to Page has finally been added. That’s another feature that former Deski users have been begging for, and now we have it. Thank you, SAP!


As you can see, not all news is good news with Webi 4.0. But SAP has made significant strides in adding in the missing functionality from Deski. There is still room for improvement, but we’re getting closer. Look for another post detailing the differences between the two products.

If you’ve been using Webi 4.0, what is your opinion?

Migration from Deski to Web Intelligence

As you probably know by now, with the release of SAP BusinessObjects Business Intelligence 4.0 (BI4), Desktop Intelligence (Deski) is officially retired. Of course, we’ve known that this day would come for nearly ten years, and some people started early in migrating their Deski documents to Web Intelligence (Webi). But many also delayed until the last minute, waiting for Webi to mature a little more. But the time for waiting is over, and it’s now time to put Deski behind us. So, to that end, I’d like to share some experience in converting those Deski documents to Webi.

The Right Tool for the Right Job

Most of your Deski documents can be converted to Webi with the help of the Report Conversion Tool. If you haven’t used this tool yet, try it out with one of your Deski Documents, and see how it does. I’ve used it many times, with varying degrees of success. Sometimes it converts complex documents, with no issues, much to my surprise. Sometimes it fails to convert very simple documents, much to my dismay. 😦

So, my first piece of advice is, if it fails to convert a simple document, don’t waste time trying to figure out why. Simply rebuild the document, from scratch, in Webi. I hate to admit it, but I learned this lesson the hard way.

I hate failure, and am usually not willing to give up easily. One time, I ran a Deski document through the Report Conversion Tool (RCT), and it failed. I couldn’t figure out why. The document had one query, one table, and one report, with only one simple variable. I spent two hours trying to figure out why it wouldn’t convert successfully. Then it dawned on me. I could rebuild it in Webi in five minutes. It wasn’t worth the time.

So, for simple reports, try it once with the RCT. If it fails, simply rebuild the document.

Did it Work?

The RCT has three potential outcomes for each document:

  • Converted Successfully
  • Partially Converted
  • Not Converted

If the conversion is successful, compare the Deski version with the Webi version. Make sure the formatting is correct. Many times, it will need some touching up. And make sure the data is the same. Not all functions work exactly the same, so you may need to modify the formulas in some variables to get the correct results.

If the document is partially converted, compare the two, and see what didn’t convert. For example, if your Deski document has any macros in it, they won’t be converted, as Webi doesn’t support that feature. You may need to get creative to find another way to get the same results.

If the document is not converted at all, the RCT will tell you why. Then you get to decide if it’s easier to try to solve the problem, or simply rebuild the document.

Freehand SQL

I’ve struggled a long time with this. Best practices tell us that we should never be using Freehand SQL to build reports. It defeats the purpose of using BusinessObjects. It is very difficult to maintain. And it often violates corporate security policy. But that hasn’t stopped people from building these reports. Sometimes it’s just easier to write the SQL by hand, rather than modify the universe to accommodate the report’s requirements. I get that.

So what do we do with these documents? Well, the good news is that you can use the RCT to convert them to Webi documents. However, since Webi doesn’t support Freehand SQL, how does it work? The RCT will convert the document by generating a universe with a derived table in it. The Freehand SQL from the Deski document will become the derived table in the universe. If the document has multiple Freehand SQL data providers, each will become an individual derived table in the same universe.

In fact, one such universe will be generated for all Deski documents that use the same database connection, and have Freehand SQL data providers.

But don’t leave it there. Once converted, start migrating these new Webi documents to real production universes. Simply change the universe for each query in the Webi document from the auto-generated universe, to your production universes. Finally, after you have all the new Webi documents attached to production universes, you can delete the auto-generated universe.

By the way, this also applies to Webi queries that were built with universes, but then the SQL was edited.

Marketing and Political Unrest

One of the most challenging tasks in converting from Deski to Webi is the fear of change coming from the user community. Let’s face it, once you get good at a tool, you don’t want to be forced to change to another tool. So, we need to make this as easy as possible for the users. This starts with two very important jobs:

  • Management Support: You have to have management behind this decision. If they aren’t, you can find yourself fighting political battles, and not always winning.
  • Marketing: You need a campaign to “sell” Webi to the user community. You want them on your side.

Both of these are interconnected. If the user community screams loud enough, management will not support the project. So, get a few key, influential users, who are respected throughout the organization, involved in the project. They will be your marketing evangelists among the rest of the community. They can help get the rest of users excited about this change.

Management will support the project based on the cost savings realized from supporting one tool, rather than two tools (One of which is reaching “End of Life”).

Are You On Board?

I’ve seen many posts on BOB from IT folks complaining about this change. Some have threatened to leave BusinessObjects and move to another tool. And if that decision makes good business sense, then do it. But don’t change tool sets simply because BusinessObjects, and SAP, have decided to drop Deski.

I grew up on Deski. I have spent hundreds, if not thousands, of hours developing reports in Deski. I became extremely competent with the tool, and even taught the classes for many years. But over the last five years, I’ve used it very little. I’ve focused my attention on Webi, and I’m glad I did. I like Webi very much. And I don’t miss all the quirks and bugs (Remember the Unhandled Exception Error?) in Deski. Yes, I know, Webi isn’t perfect. But it’s a great tool. So, make sure that you’re learning it, and excited about it, before you begin your conversion project.


The bottom line is that Deski is now retired. Honestly, I’m happy about that. As much as I loved that tool, it’s much easier to focus on one less tool in the tool belt. So, I hope this article has given you some useful tips for converting those Desktop Intelligence documents to Web Intelligence.

What is your experience? Do you have any additional tips you could share?

Web Intelligence Performance Tips

The recommendations in this document are intended to help improve the general performance of BusinessObjects Web Intelligence documents. To that end, we’ll begin with an overview of Web Intelligence work flows that affect document performance.

Refreshing a Document

When a Web Intelligence document is refreshed, either manually or by the Job Server, the queries in that document are run sequentially. The first query to run is the query on the first tab in the Query Panel. Then the second query runs, and so on. The tabs can be reordered, if needed, to reorder the run sequence.

Once the query results are retrieved and stored in the document, Web Intelligence begins the process of combining results, where necessary, and calculating any variables or formulas in the document. Finally, the reports are rendered with the results.

Documents with many variables and/or many rows of data can take a substantial amount of time to perform the needed calculations. Therefore, it is recommended to push as many calculations as possible down to the universe. This allows the data to be calculated by the database, rather than by the report server.
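As a simple illustration of this pushdown, consider a margin calculation. The object and column names below are hypothetical:

```sql
-- Universe measure object "Margin", defined with database SQL,
-- so the database does the arithmetic once, as part of the query:
SUM(fact_sales.revenue - fact_sales.cost)
```

The alternative, a report variable such as =Sum([Revenue]) - Sum([Cost]), is recalculated by the report server every time the document renders.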

Queries that span multiple contexts will generate multiple SQL statements. How these statements are run, and combined, will depend on the JOIN_BY_SQL parameter in the universe. By default, Web Intelligence will generate a separate SQL statement for each context, as well as for each measure from a different table. It will then run those SQL statements in parallel, and merge the results after the data has been retrieved. If the JOIN_BY_SQL parameter is set to Yes, then Web Intelligence will generate a single SQL query, with multiple Selects, and a FULL OUTER JOIN between each Select. The results will then be combined by the database, and the final results will be returned to the document. This can result in a much smaller and more efficient dataset being returned to the document.
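To illustrate, suppose a query returns Region with one measure from each of two contexts. With JOIN_BY_SQL set to No (the default), Web Intelligence generates two statements and merges the results itself; with it set to Yes, it generates one statement and lets the database do the combining. The tables and columns below are hypothetical:

```sql
-- JOIN_BY_SQL = No: two statements, run in parallel,
-- merged by Web Intelligence after retrieval
SELECT region, SUM(revenue)     FROM fact_sales   GROUP BY region;
SELECT region, SUM(returns_amt) FROM fact_returns GROUP BY region;

-- JOIN_BY_SQL = Yes: one statement, combined by the database
SELECT COALESCE(s.region, r.region) AS region, s.rev, r.ret
FROM  (SELECT region, SUM(revenue)     AS rev
       FROM fact_sales GROUP BY region) s
FULL OUTER JOIN
      (SELECT region, SUM(returns_amt) AS ret
       FROM fact_returns GROUP BY region) r
ON    s.region = r.region;
```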

However, make sure that queries do not contain objects that are not required for the reports. This is especially true of measure objects, since these typically come from one context only. Removing one unneeded measure can remove one SQL statement from the database.

Rendering a Document

Once the query, or queries, complete, the document will merge the results of multiple queries, then perform any and all needed calculations before rendering the results. When a user goes from one report tab to another, the new tab will run its calculations again, and then, once complete, display that tab for the user.

Reports that contain many calculations, in the form of formulas or variables, may render very slowly, depending on the volume of data contained in the report. For example, a report that contains 10,000 rows of data, and 10 columns of formulas or variables, will need to perform 100,000 calculations before rendering the results.

Therefore, significant performance improvements can be gained by reducing the number of rows contained on the report, as well as reducing the number of calculations needed. As mentioned in the previous section, it is much better to do calculations in the universe, than in the document. If the calculations are done in the universe, then the database performs the calculations once, and delivers them to the document. However, if the calculations are done in the report, they will happen when the query is completed, as well as when a user clicks on a different tab. Of course, documents will always have calculations in them, but we want to reduce them as much as possible, especially when there is much data in the report.

Data Reduction

As mentioned, one of the most common reasons for poor report performance is too much data in the report. Often, the reason for too much data is the need for multiple levels of data for drilling. Each level of a hierarchy that is added to a document multiplies the size of the result set, along with the document size. However, when data sets are large, alternate techniques need to be employed to reduce the amount of data in the document.

Using OpenDocument: One such technique involves creating a high level document, with only the data needed to display the top level before drilling. Then, using the OpenDocument function, we can create a URL that, when clicked on, will drill into a second document, passing values into prompts in that document. For example, if the user clicks on a Region to drill into that region, the Region name, or ID, is passed to the second document, which is then refreshed for that Region only. This technique keeps the first document small and agile, and keeps the second document small as well. Note that OpenDocument can be used in conjunction with standard drilling. A document can be set up to drill one or two levels within the document, then further drilling can be enabled with the OpenDocument function.
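To sketch the idea, an OpenDocument URL for the Region example might look like the following. The server, port, document name, and prompt text are all placeholders, and the exact path varies by version; lsS passes a single value to a named prompt, while lsM passes a list of values:

```
http://<server>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp?sDocName=Region+Detail&sType=wid&lsSEnter+Region%3A=West
```

Here the target document "Region Detail" has a prompt whose text is "Enter Region:", and the clicked Region value ("West") is passed to that prompt, so the second document refreshes for that Region only.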

Using Query Drill: Another technique, which is similar to the OpenDocument technique, is Query Drill. The difference is that Query Drill operates entirely within one document. The document is built with only high level data. Then, when a user clicks on a value to drill into it, the query is rerun, with an additional query filter added, which will filter the results to those results needed for the drill. For example, if a user clicks on the West Region, the query is rerun, with a new query filter, where Region = ‘West’. This technique also keeps the document size much smaller.


There are many factors involved in improving the performance of a Web Intelligence document, and the techniques presented here are only a few. There may also be things that can be improved in the universe, or at the database. It's important to optimize your reporting in any and all places available.

Jonathan Brown from SAP has also written an excellent paper on Web Intelligence performance here.

Display Input Control Values on Report

A common requirement in the reporting world is to display filtered values on a report. This could include Query Filters (including prompted values), Report Filters (including Drill Filters), or Block Filters. Of course, displaying Query Filters on the report is fairly simple. If the Query has one or more prompts, you would use the UserResponse() function to display the selected value(s) on the report. Otherwise, you could use a formula like this:
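For instance, if the query has a prompt whose text is "Enter Region(s):" (a hypothetical prompt, used here only for illustration), the selected values can be shown with:

```
=UserResponse("Enter Region(s):")
```

Similarly, the DrillFilters() function returns the drill filters currently applied to the report, which covers the Drill Filter case mentioned above.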


Or, of course, you could simply type the values into a cell. The same goes for Block Filters. Since they aren’t dynamic, you can simply type them into a cell.

However, there is no function to display the values selected in an Input Control. This question was recently posted on BOB: “How do I display the values selected in an Input Control on the report?” The solution was fairly simple, but sometimes the simple solutions can be the hardest to find.

To demonstrate this, I’ll use the eFashion universe. I started by creating the following query:

After running the query, I removed the Store name column from the block. This column will be used for the Input Control.

Next, I created an Input Control as follows:

  • Object: Store name
  • Type: Check boxes (Multiple)
  • Number of lines: 14

For all other settings, use the defaults. Now, as you select one or more Stores from the Input Control, the values in the block adjust accordingly. Of course, we can’t see on the report which value(s) are selected. So here’s how we accomplish this:

From the Templates tab of the Report Manager, expand Tables, and drag a Vertical table on to the report. This gives you a blank table, with one column. From the Data tab of the Report Manager, drag the Store name object into the center of one of the cells in the table. At this point, the table should be showing all Stores.

Next, click the Edit button on the Input Control.

In the Edit Input Control box, go to the Dependencies tab. You'll see that Block 1 is selected. Select Block 2 as well.

Click OK. Now, try it out. As you select values in the Input Control, Block 1 continues to show the filtered data, while Block 2 shows the selected Stores.

There you go. Now all you have to do is format as needed, and you’re done. Let me know what you think.

Does Drilling Need to be Painful?

I never know where the inspiration might come from for a blog post. This one just dropped in my lap, so to speak. A co-worker had some questions about drilling in Web Intelligence, and what impact it would have on the database. Well, the answer was a little more complex than I expected. So that got me wondering if other people might be confused about the various drilling options. So this is all about drilling in Web Intelligence, and when to use which options.


Let's start by looking at the drilling options in Web Intelligence, and what they mean. Since drilling works the same in both Rich Client and Web Client, I'll be working in Rich Client for this discussion, but will mention the differences in workflow, when appropriate.

In Rich Client, when you go to Tools – Options, you will find that the Options dialog box has a Drilling tab. In the Web Client, you will go to InfoView preferences – Web Intelligence to find these options.

The first option allows you to decide what happens when you click the Drill button. By default, when you go into Drill mode, the current report will be placed into drill mode. However, if you select the other option, “Start drill on duplicate report”, then entering drill mode will cause the current report to be duplicated, with the duplicate report being placed into drill mode. In this method, the original report remains in Edit mode, rather than Drill mode.

Next, we have an option that says, "Prompt if drill requires additional data". When drilling on a document, ideally, the data needed for drilling should already be in the document. But this isn't always the case. You may need to drill beyond what is in the document. This is called drilling out of scope. More on this later. However, this option allows you to be prompted if your drill action will require the query to be run. Think of it as a warning system, letting you know that drilling may take longer than expected. If you choose this option, you will get a dialog box, when drilling out of scope, that allows you to select all the needed levels for additional drilling.

“Synchronize drill on report blocks” is an option that should be turned on by default, in my humble opinion. If you have two blocks on a report, such as a table and chart, that show the same data, this option allows you to drill down on both blocks with one click. If you don’t turn this option on, then one block will drill, and the other will be filtered for the drilled value.

If you’ve done any drilling in Web Intelligence, you’re probably aware of the Drill Filter Toolbar that appears above the report when drilling.

When you drill, combo boxes appear in this toolbar that allow you to drill sideways by selecting a different value for the drilled dimension. As the text in the toolbar indicates, you can also drag dimensions onto this toolbar from the Data tab of the Report Manager, and use them as simple report filters. The final option, “Hide Report Filter toolbar on startup”, will prevent this toolbar from appearing when you start drill mode.

Note that any changes you make to these options will take effect the next time you open a document.

Setting up Drill

In order to drill in a Web Intelligence document, Hierarchies (Drill Paths, in 4.x) need to be created in the universe, by the universe designer. This isn’t covered in this post. We’ll assume that the designer has already done this.

When building a query, we typically set up the drill path by defining the Scope of Analysis. This simply means selecting which objects from a Hierarchy we wish to use for the drill path. To do this, start by exposing the Scope of Analysis pane (Click the 3rd button on the toolbar). Then switch the Universe pane from “Display by objects” to “Display by hierarchies”.

Note that it is not necessary to Display by hierarchies. This view is helpful to see the hierarchies that are built in the universe. However, if you are familiar with the hierarchies, you can leave “Display by objects” selected.

You can now drag the objects from the Universe pane to the Scope of Analysis pane that you wish to use for drilling. These objects will be included in the query results, but will not be shown on the report until you start drilling to them. You don’t need to select all the objects from a hierarchy, only the ones you need for drilling.

In the query above, let’s say that we selected Year, Quarter and Month for the Scope of Analysis. On the report, we can drill from Year to Quarter, and Quarter to Month, using the query results stored in the document. However, if we want to drill from Month to Week, we will be drilling beyond the Scope of Analysis defined in the query. This is called drilling out of scope, and requires the query to be rerun. Drilling out of scope is generally not desirable, as it adds additional overhead to the database, and takes longer to drill. Therefore, it’s best to define the needed levels in the Scope of Analysis.

However, it’s also not good to define too many levels in the Scope of Analysis, as each additional level increases the size of the result set stored in the document, exponentially. This can cause the document to open slowly, import and export slowly, and perform sluggishly, especially when refreshing the query(ies). The bottom line is that you want to find the balance between too many levels and too few levels.

Query Drill

As mentioned, for each additional level added to the Scope of Analysis, the size of the result set grows exponentially. This can cause documents to bloat to huge sizes. Query Drill is designed to alleviate this problem. However, it can also be a dangerous feature. Here’s how it works.

Rather than defining a Scope of Analysis, you would simply create your query as usual, and run it. Once the query has run, open the Document Properties (File – Properties). Under Document options, you will see a check box labeled "Use query drill". Click the check box to activate Query drill.

Now, when you drill, for example, on 2003 in the Year column, Web Intelligence will rerun the query, adding a query filter for Year=2003. If you then drill on Q4, the query will be run again, with another query filter added for Quarter=Q4. This keeps the result set nice and small, but allows drilling to all levels needed.
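Under the hood, the effect on the generated SQL is roughly as follows (the table and column names here are illustrative, not from eFashion):

```sql
-- Initial query: Year level only
SELECT CAL.YEAR, SUM(FACT.REVENUE)
FROM FACT
JOIN CAL ON FACT.DATE_ID = CAL.DATE_ID
GROUP BY CAL.YEAR;

-- After drilling on 2003, query drill reruns the query at the
-- next level down, with the drilled value applied as a filter
SELECT CAL.YEAR, CAL.QUARTER, SUM(FACT.REVENUE)
FROM FACT
JOIN CAL ON FACT.DATE_ID = CAL.DATE_ID
WHERE CAL.YEAR = 2003
GROUP BY CAL.YEAR, CAL.QUARTER;
```

Each drill action replaces the previous result set rather than adding to it, which is why the document stays small.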

So why is this a dangerous feature? First of all, every time a user drills down, up, or sideways, the database is hit with another query. This can add tremendous overhead to the database, especially if there are many users using this feature.

Second, depending on how long it takes for the query to run, users will have to wait for the results of their drilling action. This can cause users to become annoyed with drilling, and with BusinessObjects in general. Therefore, Query Drill should be used with extreme caution.


Yes, there’s some administration to be done with drilling. So, if you’re not an administrator, you can skip this section, although you may find it interesting.

There are some security settings that can be used to control the way users use drill. In the CMC, under Applications – Web Intelligence, there are some advanced settings that apply to drilling. The first one is “Enable drill mode”. With this setting, you can completely eliminate a user’s ability to drill. If you don’t want certain users to use drilling, use this setting to deny drilling.

“Extend scope of analysis” allows a person to drill out of scope. Since drilling out of scope adds overhead to the database, you can limit users' ability to drill out of scope by denying this right.

So, with these two settings, you can have some control over drilling. Use these cautiously, as drilling is one of the most useful features of Web Intelligence.


The questions from my co-worker had to do with the impact of drilling on the database. Of course, my answer was along the lines of "It depends". There are numerous factors that can affect the impact on the database. Hopefully, this article gives you the information you need to make an educated decision regarding the options for drilling. Let me know if you have any special tricks or guidelines for drilling.