Monday 25 February 2013

OBIEE Reports Performance

I am assuming you are using a data warehouse (DWH) environment for the reports, rather than a live OLTP system.

In normal circumstances, bitmap indexes are best suited to a DWH because there are huge amounts of data to handle.
Bitmap indexes work best where the data changes slowly (few concurrent transactions). They also consume far less space than the common b-tree index, and with tables containing millions or billions of rows, that becomes an important consideration.
The usual rule of thumb is that bitmap indexes work best on columns with a low number of distinct values (low cardinality). But it is just as important to look at how much data is updated by concurrent sessions: if that volume is low, a bitmap index is usually the right choice.
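As a minimal sketch (the table and column names here are invented purely for illustration), a bitmap index on a low-cardinality column of a fact table would look something like this on an Oracle database:

-- hypothetical fact table; region_key has only a handful of distinct values
CREATE BITMAP INDEX fact_sales_region_bix ON fact_sales (region_key);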

Remember that indexing is usually the first thing we mean when we talk about performance tuning the database or DWH.


Best practices include:


Using a star schema model wherever possible. Snowflaking hurts performance, so if you can convert a snowflake into a star by using alias tables or multiple logical table sources (LTS), go for it.

Using aggregate tables wherever possible. You probably won't be able to create aggregate tables for every possible combination of levels in all the hierarchies, but if you check your usage statistics you'll see which are the most common (or longest-running) queries, and you can plan accordingly, for example as sketched below.
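For example (the table and column names below are invented for illustration), a month-and-product level aggregate of a daily sales fact could be built like this:

-- hypothetical aggregate: daily sales rolled up to month and product
CREATE TABLE fact_sales_month_agg AS
SELECT t.month_key,
       f.product_key,
       SUM(f.sales_amount) AS sales_amount
FROM   fact_sales f
JOIN   dim_time t ON t.day_key = f.day_key
GROUP BY t.month_key, f.product_key;

The aggregate would then typically be mapped into the repository as an additional logical table source at the month/product level.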

Avoiding excessive use of functions like CAST, NVL, LPAD, RPAD, TRIM etc., or pushing them to the ETL phase. Functions applied to indexed columns can stop the optimizer from using those indexes, and remember that in a star or snowflake schema every foreign key column of your fact tables should itself be indexed. Unless you build function-based indexes, assume that functions in join conditions will destroy performance, as in the example below.
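For instance (hypothetical tables and columns), a join written like this prevents an ordinary index on cust_id from being used:

SELECT f.sales_amount
FROM   fact_sales f
JOIN   dim_customer c ON TRIM(f.cust_id) = c.cust_id;

-- if the TRIM really cannot be pushed back to the ETL, a function-based index is the workaround
CREATE INDEX fact_sales_custid_fbi ON fact_sales (TRIM(cust_id));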

Avoiding unnecessary calculations, or again pushing them to the ETL phase if possible. Get a proper DWH built by the ETL team, in accordance with the reporting requirements.

Using the FILTER function instead of CASE WHEN statements. Filter functions include an internal WHERE clause, which lets them work on less data for the comparison and hence perform better.
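A quick sketch in logical SQL (the subject area and column names are made up): both expressions return 2013 revenue, but the FILTER form states the condition explicitly rather than evaluating a CASE over every row.

FILTER("Sales Facts"."Revenue" USING "Time"."Year" = 2013)

-- roughly the CASE form it replaces
CASE WHEN "Time"."Year" = 2013 THEN "Sales Facts"."Revenue" ELSE 0 END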

Using UNION ALL instead of UNION wherever possible, i.e. wherever duplicate rows are not a concern.
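For example (invented table names), if the two branches can never return the same rows, the cheaper form is:

SELECT order_id FROM fact_orders_2012
UNION ALL
SELECT order_id FROM fact_orders_2013;

UNION would add a sort and de-duplication step that UNION ALL skips.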

Using materialized views (for query rewrite) when some complex queries are needed. Assuming that your database is set up to allow query rewrite, materialized views can significantly improve performance without rework in the repository or front-end. Remember, as always, that this is a band-aid solution: at some stage you will need to migrate your repository to use the materialized view directly, and materialized views bring their own complications and administration.
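A minimal sketch (names invented) of a materialized view that is eligible for query rewrite on Oracle:

CREATE MATERIALIZED VIEW mv_sales_by_month
ENABLE QUERY REWRITE
AS
SELECT t.month_key,
       SUM(f.sales_amount) AS sales_amount
FROM   fact_sales f
JOIN   dim_time t ON t.day_key = f.day_key
GROUP BY t.month_key;

With query rewrite enabled on the database, a monthly revenue query that still hits fact_sales can be transparently redirected to the materialized view.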

Some of the NQSConfig.ini parameters may also be altered for better tuning.
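For example, the [CACHE] section of NQSConfig.INI controls the BI Server query cache. The values below are only illustrative, not recommendations, and the exact parameter list varies between OBIEE versions:

[CACHE]
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;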

And always check the NQQuery.log file to see the final query that is being executed for a report. That may also tell you what alterations are needed for better performance.

In some cases you are probably just looking to speed up a few reports that take an overly long time to open. The best/quickest suggestion then is to use a proper cache mechanism. There are various ways in which you can manage the cache so that your reports run faster; the easiest way to pre-fill the cache with a report is to have an agent 'run' the report into the cache, thus seeding the cache for the users who follow.

Friday 15 February 2013

OBIEE Repository Misconception


Consistency Check Misconception


Passing a consistency check does not guarantee that a business model is constructed correctly, but it does rule out many common problems.

You've consistency checked your repository; there are no errors or warnings.

This is not proof that your business model is correct and should not be taken as a validation of any intended results. The consistency check tells you that, from the business model, it is possible to construct sensible queries against the physical layer.

Wednesday 6 February 2013

Dynamic Substrings

The challenge was to extract part of an IP address. Simple enough, you'd think, given that an IP address has a format of 111.222.333.444 (four blocks of three digits, separated by the '.').

The INSTR function takes four arguments:
INSTR(string, substring, start_position, occurrence)

The LOCATE function takes three arguments:
LOCATE(expr1, expr2, integer)

E.g. IP address: '111.222.333.444'

The problem: remove the last block of numbers.

SUBSTRING('111.222.333.444', 1, INSTR('111.222.333.444', '.', 1, 3) - 1)

The desired result:
111.222.333
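To spell out the arithmetic: INSTR('111.222.333.444', '.', 1, 3) searches from position 1 for the third occurrence of '.', which sits at position 12; subtracting 1 gives 11, so the SUBSTRING keeps the first 11 characters, i.e. '111.222.333'.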

But what if the blocks of numbers have no leading zeros, and so vary in length? This will give us an irregular pattern, with only the '.' as a marker to indicate the break between sections.

It turns out to be fairly simple. Much as with SUBSTR on the Oracle database side, a negative number works from the end of the string.

SUBSTRING('111.222.333.444' FROM 1 FOR LOCATE('.','111.222.333.444',-1)-1)

Putting a negative value in the LOCATE integer parameter makes it work in reverse.
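As a quick sanity check (the shorter address is just an illustration, assuming the reverse behaviour described above), the same expression copes with an irregular pattern:

SUBSTRING('10.1.2.33' FROM 1 FOR LOCATE('.', '10.1.2.33', -1) - 1)
10.1.2

The last '.' is at position 7, so the expression keeps the first 6 characters.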