Monday 28 April 2014

OBIEE - Catalog Cleanup

Catalog Cleanup


Do you sometimes look at the catalog and wonder what kind of timebomb is lurking in there? I have nightmares. You've designed a catalog layout and asked developers to follow standards and guidelines, but one year later......? What about two or even three years later? What Answers work has been started and then abandoned, unfinished and not working? What Answers remain from 'deleted' or 'hidden' dashboards and now no longer work due to repository changes?

The challenge


We are a small team of developers, so for us automation of tasks is key. For each report in the catalog what we need to know is:
Does this report generate a usable query (i.e. is it broken)?
Is this report on a dashboard, or on more than one dashboard?
If the report is on a dashboard, is it in the correct directory?

We anticipate that this will be repeated periodically; you can never be sure what creeps in and slips through the cracks, but at least the generation of the report can be automated and scheduled.

At a high level, what we do is combine the output of some Catalog Manager reports, extract the logical SQL for each Analysis, and use NQCMD to run the queries and track the results.

The Catalog Reports


Firstly we will generate a Dashboard report and an Analysis report, upload the data into some database tables, and stitch the two together with an outer join so we can see the reports that are on dashboards as well as those that are not.

We generate the Analysis report and the Dashboard report with commands like those below; note that your system will have different settings.

D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalogmanager\runcat.cmd -cmd report -excelFormat -distinct -offline D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalog\samplesales -forceOutputFile D:\CatalogCapture\Analysis.txt -type "Analysis" "Name" "Path" "Table" "Column" "Subject Area"

D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalogmanager\runcat.cmd -cmd report -excelFormat -distinct -offline D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalog\samplesales -forceOutputFile D:\CatalogCapture\Dashboard.txt -type "Dashboard" "Name" "Path" "Dashboard Style" "Dashboard Page Name" "Dashboard Page Path" "Dashboard Page Description" "Analysis Name" "Analysis Path"

Join the two outputs on "Path" and "Analysis Path" - think outer join here, because we still want to process those Analyses that are not on dashboards.
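Just as an illustration (we actually do this in database tables), here is a minimal sketch of that outer join in Python, assuming the two report files are tab-delimited with a header row matching the column names requested in the commands above; the delimiter and encoding produced by -excelFormat may differ on your system.

import csv

# Index the Dashboard report rows by the Analysis they reference.
dash_by_analysis = {}
with open(r"D:\CatalogCapture\Dashboard.txt") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        dash_by_analysis.setdefault(row["Analysis Path"], []).append(row)

# Outer join: keep every Analysis, adding dashboard details where they exist.
joined = []
with open(r"D:\CatalogCapture\Analysis.txt") as f:
    for analysis in csv.DictReader(f, delimiter="\t"):
        for dash in dash_by_analysis.get(analysis["Path"], [None]):
            joined.append({
                "analysis_name": analysis["Name"],
                "analysis_path": analysis["Path"],
                "dashboard_path": dash["Path"] if dash else None,
                "dashboard_page": dash["Dashboard Page Name"] if dash else None,
            })

Rows with a dashboard_path of None are the Analyses that are not used on any dashboard.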

Generating a Logical Query


What we are going to do is pass each Analysis into NQCMD, using the current repository, to test whether it produces a valid Logical Query.

Create a text file called creds.txt and make the contents something like
login=weblogic
pwd=weblogic123

Now, using the report output from part 1, we can run the following to extract the query for an Answer from the catalog. The query will be saved into a file called myquery.lsql
D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalogmanager\runcat.cmd -cmd report -online http://%SERVR%/analytics/saw.dll -credentials "D:\CatalogCapture\creds.txt" -forceOutputFile "myquery.lsql" -folder "/shared/...../myreport" -type "Analysis" "SQL"

One of two things happens here:
1. The file containing the query is generated, or
2. An "empty file" is generated.

If we have an empty file, the Answer failed to generate a query and should be flagged for investigation.
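If you are scripting this, the empty-file case is easy to detect; a quick sketch (the file name matches the runcat command above):

import os

# A zero-byte file means the Answer did not produce a logical query.
if os.path.getsize("myquery.lsql") == 0:
    print("FLAG: no logical query generated - investigate this Answer")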

However, if we have a query in the file, we need to pass the contents through NQCMD to see if the Answer works.

Extending into NQCMD


We have previously used NQCMD to do impact analysis on repository changes and can use the functionality almost unchanged.

"D:\OBI\Oracle_BI1\bifoundation\server\bin\nqcmd.exe" -d coreapplication_OH140093583 -u weblogic -p weblogic123 -s "myquery.lsql" -o "output.lres"

Once again one of two things can happen here:
1. The logical query passed in returns a result, or
2. An error is generated.

The error may be something like

[10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed: 
[nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist
 at OCI call OCIStmtExecute.
[nQSError: 17010] SQL statement preparation failed.
Statement execute failed


Processed: 1 queries
Encountered 1  errors

Once again flag the error.
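As a sketch, this check can be scripted too, assuming the nQSError messages appear in the console output captured from nqcmd (DSN, credentials and paths are the same as in the command above):

import subprocess

cmd = [
    r"D:\OBI\Oracle_BI1\bifoundation\server\bin\nqcmd.exe",
    "-d", "coreapplication_OH140093583",
    "-u", "weblogic", "-p", "weblogic123",
    "-s", "myquery.lsql",
    "-o", "output.lres",
]

# Capture everything nqcmd writes to the console and look for nQSError.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
console, _ = proc.communicate()
if b"nQSError" in console:
    print("FLAG: the logical query failed - investigate this Answer")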

Wrapping things up and bringing it together


For each Answer we now know:
1. Is it used on a dashboard?
2. Where is the Answer stored?
3. Does it generate a logical query?
4. Does the logical query work?

Now we can focus efforts on fixing, rewriting or deleting those Analyses that are on dashboards, then those that are not on a dashboard but do not work.
Your initial step may simply be to identify, as part of a maintenance routine, those Answers that don't generate a logical query.
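To make that prioritisation concrete, here is a rough sketch of how the results can be bucketed; the field names and sample rows are purely illustrative, standing in for whatever table or structure holds the four facts above:

# Each Answer carries the four facts collected above (illustrative field names).
answers = [
    {"path": "/shared/Sales/Some Report", "on_dashboard": True,  "has_logical_sql": True,  "query_ok": False},
    {"path": "/shared/Scratch/Old Draft", "on_dashboard": False, "has_logical_sql": False, "query_ok": False},
]

# Highest priority: broken Answers that are live on a dashboard.
fix_first = [a for a in answers if a["on_dashboard"] and not a["query_ok"]]

# Next: broken Answers not on any dashboard (fix, rewrite or delete).
fix_or_delete = [a for a in answers if not a["on_dashboard"] and not a["query_ok"]]

# Maintenance sweep: Answers that never produced a logical query at all.
no_query = [a for a in answers if not a["has_logical_sql"]]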

Thursday 24 April 2014

OBIEE - OBI_METRICS_AGENT on Windows

OBI_METRICS_AGENT


I would firstly like to extend my thanks to Rittman Mead for making this freely available. It managed to fill a gap in performance analysis that I had been struggling with. Yes, I had started down the WLST route and was pulling my hair out.

So what did I do to get it to work for me?

The setup: OBIEE 11.1.1.7 on Windows Server 2008, 64-bit.

Firstly, download the zip file containing the obi_metrics_agent from https://github.com/RittmanMead/obi-metrics-agent.

The zip file contains a Python script, and this being Windows nothing is easy; we have to go and download and install Python. Please make sure you download the correct version for your machine. I downloaded Python 2.7 for 64-bit Windows (python-2.7.6.amd64.msi), ran the installer and put Python into C:\Python27.

Job done....... not quite. The obi_metrics_agent needs the lxml library. Fortunately my trusty steed "Google" was at hand and kept nudging me in the right direction.

It transpires that you need to run ez_setup.py; this will download the setup tools, allowing you to "easy install" other packages and any dependent packages. Download the ez_setup.py script and place it in the C:\Python27 directory.

Easy, I thought, and from the command line I ran C:\Python27\python.exe ez_setup.py and, apart from a few error messages about an unavailable proxy.... nothing. A few minutes of head-scratching later I realised that none of the servers can access the internet, so after a few minutes more on the good steed "Google" grazing the plains of knowledge, I downloaded the setuptools-3.4.4.zip file and placed it in the C:\Python27 directory. Now the ez_setup.py command completed without error.

Now I have easy_install.exe and can download the lxml-2.3-py2.7-win-amd64.egg file - once again, make sure you get the correct version for your Python install. You will find easy_install.exe in the C:\Python27\Scripts folder, so, once again placing the egg file on the server, we run c:\Python27\Scripts\easy_install.exe lxml-2.3-py2.7-win-amd64.egg. This unpacks into the C:\Python27\Lib\site-packages directory.

So, if everything has worked so far, you are nearly good to go.

I then unpacked the obi_metrics_agent.zip into the middleware home directory (on my machine D:\OBI), which created a subdirectory called D:\OBI\obi-metrics-agent-master, and this is where the main script resides.

After some faffing around, I settled on the following as appropriate for my needs; yours may be different.

C:\Python27\python.exe D:\OBI\obi-metrics-agent-master\obi-metrics-agent.py --opmnbin D:\OBI\instances\instance1\bin\opmnctl.bat --data-directory D:\OBI\obi-metrics-agent-master\data --interval 10

Now I must admit I have spent a fair amount of time going through the different options and outputs, and it really works. I'll leave it to you to judge, but thanks Rittman Mead.

I have jobs that run every couple of minutes loading the data into a database, allowing us to monitor the internals against usage - amazing!
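For what it's worth, here is a minimal sketch of that kind of load job, assuming the agent has written comma-separated files with a header row into the data directory (check what your chosen options actually produce) and using SQLite purely for illustration; swap in your own database connection.

import csv
import glob
import sqlite3

# Illustrative target table: one row per metric value, in "long" format.
conn = sqlite3.connect(r"D:\OBI\obi-metrics-agent-master\metrics.db")
conn.execute("CREATE TABLE IF NOT EXISTS obi_metrics (source_file TEXT, column_name TEXT, value TEXT)")

for path in glob.glob(r"D:\OBI\obi-metrics-agent-master\data\*.csv"):
    with open(path) as f:
        for row in csv.DictReader(f):
            for column, value in row.items():
                conn.execute(
                    "INSERT INTO obi_metrics (source_file, column_name, value) VALUES (?, ?, ?)",
                    (path, column, value),
                )

conn.commit()
conn.close()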