
Showing posts with label performance monitoring. Show all posts

Thursday, 24 April 2014

OBIEE - OBI_METRICS_AGENT on Windows

OBI_METRICS_AGENT


Firstly, I would like to extend my thanks to Rittman Mead for making this freely available. It fills a gap in performance analysis that I had been struggling to fill; I had started down the WLST route and was pulling my hair out.

So what did I do to get it to work for me?

First, the setup: OBIEE 11.1.1.7 on Windows Server 2008 64-bit.

Firstly, download the zip file containing the obi_metrics_agent from https://github.com/RittmanMead/obi-metrics-agent.

The zip file contains a Python script, and Windows being Windows, nothing is easy: we have to download and install Python first. Make sure you download the correct version for your machine. I downloaded Python 2.7 for 64-bit Windows (python-2.7.6.amd64.msi), ran the installer and put Python into C:\Python27.

Job done....... not quite. The obi_metrics_agent needs the lxml library. Fortunately my trusty steed "Google" was at hand and kept nudging me in the right direction.

It transpires that you need to run ez_setup.py, which downloads Setuptools and gives you "easy_install" for installing other packages and any dependent packages. Download the ez_setup.py script and place it in the C:\Python27 directory.

Easy, I thought. From the command line I ran C:\Python27\python.exe ez_setup.py and, apart from a few error messages about the proxy being unavailable... nothing. A few minutes of head-scratching later I realised that none of the servers can access the internet. So, after a few more minutes on the good steed "Google" grazing the plains of knowledge, I downloaded the setuptools-3.4.4.zip file and placed it in the C:\Python27 directory. Now the ez_setup.py script completed without error.

Now I have easy_install.exe and can install the lxml-2.3-py2.7-win-amd64.egg file. Once again, make sure you get the correct version for your Python install. You will find easy_install.exe in the C:\Python27\Scripts folder, so, placing the egg file on the server, we run C:\Python27\Scripts\easy_install.exe lxml-2.3-py2.7-win-amd64.egg. This unpacks into the C:\Python27\Lib\site-packages directory.

So, if everything has worked so far, you are nearly good to go.

I then unpacked obi_metrics_agent.zip into the Middleware home directory (on my machine D:\OBI), which created a subdirectory called D:\OBI\obi-metrics-agent-master; this is where the main script resides.

After some faffing around, I settled on the following as appropriate for my needs, yours may be different.

C:\Python27\python.exe D:\OBI\obi-metrics-agent-master\obi-metrics-agent.py --opmnbin D:\OBI\instances\instance1\bin\opmnctl.bat --data-directory D:\OBI\obi-metrics-agent-master\data --interval 10

Now I must admit I have spent a fair amount of time going through the different options and outputs, and it really works. I'll leave it to you to judge, but thanks Rittman Mead.

I have jobs that run every couple of minutes loading the data into a database allowing us to monitor the internals against usage - amazing!
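For what it's worth, my loading jobs are little more than CSV parsing. Here is a minimal sketch; the column layout (timestamp, component, metric, value) is an assumption of mine, so check the files in your own data directory before using anything like this:

```python
import csv, io

# Hypothetical CSV layout: timestamp, component, metric, value.
# Check your own data directory; this layout is an assumption.
sample = """2014-04-24 10:00:00,Oracle BI DB Connection Pool,Current Busy Connection Count,3
2014-04-24 10:00:10,Oracle BI DB Connection Pool,Current Busy Connection Count,5
"""

def load_metrics(text):
    """Turn the agent's CSV text into a list of dicts ready for a DB insert."""
    rows = []
    for ts, component, metric, value in csv.reader(io.StringIO(text)):
        rows.append({"ts": ts, "component": component,
                     "metric": metric, "value": float(value)})
    return rows

for row in load_metrics(sample):
    print(row["ts"], row["metric"], row["value"])
```

From there, inserting the dicts into a staging table with your database driver of choice is routine.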

Tuesday, 21 January 2014

Generating an OBIEE Performance Test Harness

You know you should be testing your OBIEE stack for performance, but what exactly should you be doing, and how can you do it?

Another question may be what is your baseline? Do you have a known baseline for performance? What tools do you use?

What we want is a method of generating a test plan where we can run (repeatably) the reports on our dashboards. Ideally we would like to generate a series of tests that we can run before a release to generate a baseline and after a release to ensure that nothing (that we are testing) is broken.

So what is available to us "out of the box" that we can plug in and start using straight away?

We will use a couple of components to build up a test.

  • Usage Tracking
  • NQCMD


Usage tracking: 

Turn on usage tracking for one of your users and open a few dashboards and reports. If you have things configured correctly you should see some entries in the S_NQ_ACCT table (I have made the assumption that you will enable database recording of usage tracking - see how to set this up here.)

The S_NQ_ACCT table tells us which components were run when you viewed the dashboard. To find out which components make up usage tracking, query the "QUERY_SRC_CD" column. A sample of results may show something like:
DashboardPrompt
Report
ValuePrompt
Member Browser Display Values
Soap
rawSQL

The logical query run for each component is in the "QUERY_TEXT" column, though this column truncates the full query after a number of characters, so if, like me, you have some very large logical queries, it may be worthwhile getting the query out of the "QUERY_BLOB" column instead.

NQCMD:

I have already discussed NQCMD here but it may be worthwhile going over some of the concepts briefly. NQCMD is a mechanism for executing a logical query against a repository for a user. This means we can execute a logical query against a repository of our choice and run the query as a named user.

The command is in the format
MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d coreapplication_OH140000583 -u <username> -p <password> -s <Query input file> -T -q  1> output.out

Running the help gives the following output
MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -help

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


Argument error near: -help
Command: MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe - a command line client which can issue SQL statements
                 against either Oracle BI server or a variety of ODBC compliant backend databases.
SYNOPSIS
         nqcmd [OPTION]...
DESCRIPTION
         -d<data source name>
         -u<user name>
         -p<password>
         -s<sql input file name>
         -o<output result file name>
         -D<Delimiter>
         -C<# number of fetched rows by column-wise binding>
         -R<# number of fetched rows by row-wise binding>
         -a (a flag to enable async processing)
         -f (a flag to enable to flush output file for each write)
         -H (a flag to enable to open/close a request handle for each query)
         -z (a flag to enable UTF8 instead of ACP)
         -utf16 (a flag to enable UTF16 instead of ACP)
         -q (a flag to turn off row output)
         -NoFetch (a flag to disable data fetch with query execution)
         -NotForwardCursor (a flag to disable forwardonly cursor)
         -v (a flag to display the version)
         -SessionVar <SessionVarName>=<SessionVarVAlue>

To use NQCMD we need to pass a logical query SQL in a file.
The logical query file can look something like

SET VARIABLE PRODUCT_NAME='T00103', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q0213EB1783381CBEC92AB1166D5FF8FC', TEST=1:SELECT .................. FETCH FIRST 25001 ROWS ONLY
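That SET VARIABLE prefix is easy to build programmatically. A minimal sketch follows; the helper name build_test_query is my own invention and the values passed in are placeholders, not real report identifiers:

```python
# Build the SET VARIABLE test prefix shown above and prepend it to a
# logical query. PRODUCT_NAME tags the test, QUERY_NAME tags the query.
def build_test_query(seq, query_key, logical_sql):
    product = "T" + str(seq).zfill(5)
    prefix = ("SET VARIABLE PRODUCT_NAME='%s', "
              "DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, "
              "QUERY_NAME='Q%s', TEST=1:" % (product, query_key))
    return prefix + logical_sql

print(build_test_query(103, "0213EB1783381CBEC92AB1166D5FF8FC",
                       "SELECT 1 FROM DUAL"))
```

The two cache-disabling variables matter: without them a warm BI Server cache would make every rerun look suspiciously fast.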

Now we have looked at the two parts we'll use let's put it all together.

To start we need to mine the usage tracking table for the logical queries of the reports on our dashboards that we wish to test.
Each query should be in its own file.
Ideally each query should be uniquely identifiable, so we can trace a query back to a relevant report or dashboard and also as a means of identification when running the test.

Generating the test logical query input files: I have knocked up a SQL script that extracts the queries from the Usage Tracking Table and saves them in a series of files. Because PL/SQL cannot spool to multiple files dynamically, the following script generates an intermediate file, which is then called and executed.

SET TERMOUT OFF
SET SERVEROUTPUT ON
SET ECHO OFF
SET DEFINE OFF
SET FEEDBACK OFF
SET LINESIZE 5000
SPOOL intermediate_file.sql

DECLARE
  CURSOR cr_qrys  IS
      SELECT DISTINCT TO_CHAR (query_blob) AS query_text, query_key
        FROM s_nq_acct
       WHERE     query_src_cd = 'Report'
             AND dbms_lob.getlength (query_blob) <= 3900
             AND ERROR_TEXT IS NULL
             AND row_count > 0
    ORDER BY query_key;

  i          NUMBER := 0;
  filename   VARCHAR2 (5);
  p1         VARCHAR2 (5000);
BEGIN
  dbms_output.enable (NULL);

  FOR lr_qrys IN cr_qrys LOOP
    i          := i + 1;
    filename   := LPAD (TO_CHAR (i), 5, '0');
    p1         :=
         'SET VARIABLE PRODUCT_NAME=''T'
      || filename
      || ''', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME=''Q'
      || lr_qrys.query_key
      || ''', TEST=1:'
      || lr_qrys.query_text;
    dbms_output.put_line ('SET COLSEP ,  
SET PAGESIZE 0
SET TRIMSPOOL ON 
SET HEADSEP OFF
SET FEEDBACK OFF
SET DEFINE OFF
SET ECHO OFF
SET TERMOUT OFF
SET LINESIZE 5000
SPOOL test_file_' || filename || '.inp
SELECT q''[' || p1 || ']''
FROM DUAL;
SPOOL OFF');
  END LOOP;
END;
/

SPOOL OFF
PROMPT     executing intermediate file
@intermediate_file.sql;
SET SERVEROUTPUT OFF

This generates a series of files with the following structure test_file_xxxxx.inp where the xxxxx matches the test PRODUCT_NAME variable of the query.

This is the first version of our baseline series of tests. As new dashboards and reports are released you will need to update the test input files with the new report queries, and potentially remove those reports that are no longer relevant.
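If SQL*Plus is not to your taste, the same file generation can be sketched in Python. The rows below are hard-coded stand-ins for the results of the S_NQ_ACCT query; in practice you would fetch them over a database connection:

```python
import os, tempfile

# Stand-in rows: (query_key, logical_sql) pairs as they would come
# back from S_NQ_ACCT. Replace with a real database fetch.
rows = [
    ("0213EB1783381CBEC92AB1166D5FF8FC", "SELECT 1 FROM DUAL"),
    ("4DFD2F426D93BC2DA37036577533BF8B", "SELECT 2 FROM DUAL"),
]

def write_test_files(rows, outdir):
    """Write one test_file_xxxxx.inp per query, same naming as the PL/SQL above."""
    names = []
    for i, (query_key, sql) in enumerate(rows, start=1):
        seq = str(i).zfill(5)
        body = ("SET VARIABLE PRODUCT_NAME='T%s', "
                "DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, "
                "QUERY_NAME='Q%s', TEST=1:%s" % (seq, query_key, sql))
        name = os.path.join(outdir, "test_file_%s.inp" % seq)
        with open(name, "w") as f:
            f.write(body + "\n")
        names.append(name)
    return names

for name in write_test_files(rows, tempfile.mkdtemp()):
    print(os.path.basename(name))
```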

The next step is to pass each file through NQCMD in turn and capture the output. Assuming the logical query test input files are in a subdirectory called testcases, we can loop through the .inp files from a DOS batch file as follows

for %%i in (testcases\*.inp) do (
<mw_home>\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d <dsn to repository> -u <username> -p <password> -s %%i -T -q 1>%%~ni.out
)
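The same loop can be driven from Python via subprocess if you prefer. Here is a sketch of just the argument building; the executable path, DSN and credentials shown are placeholders, and nqcmd itself obviously has to exist on the server before any of this runs:

```python
import os

def nqcmd_args(nqcmd_exe, dsn, user, password, inp_file):
    """Build the nqcmd argument list for one .inp file.

    The output file name mirrors the batch file's %~ni trick:
    input base name plus .out.
    """
    base = os.path.splitext(os.path.basename(inp_file))[0]
    out_file = base + ".out"
    return [nqcmd_exe, "-d", dsn, "-u", user, "-p", password,
            "-s", inp_file, "-T", "-q"], out_file

args, out = nqcmd_args("nqcmd.exe", "coreapplication_OH140000583",
                       "weblogic", "secret", "testcases/test_file_00001.inp")
print(out)
```

To actually run it you would hand args to subprocess.run (or subprocess.call on Python 2) with stdout redirected to out.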

Once the above loop has finished we will have a series of files called test_file_xxxxx.out

Peering into an output file, you will see it has the structure detailed below


-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


SET VARIABLE PRODUCT_NAME='T03818', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q4DFD2F426D93BC2DA37036577533BF8B', TEST=1:SELECT ........... ASC NULLS LAST, 3 ASC NULLS LAST FETCH FIRST 200001 ROWS ONLY

SET VARIABLE PRODUCT_NAME='T03818', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q4DFD2F426D93BC2DA37036577533BF8B', TEST=1:SELECT .............................

--------------------------------------------------------------------------------------------------------------------------------------------------------
s_0          s_1                                                                                                  s_2          3 s_4                    
--------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------
Clock time: batch start: 10:34:21.000 Query from: 10:34:22.000 to: 10:35:22.000 Row count: 15
 total: 59.920 prepare: 0.016 execute: 59.889 fetch: 0.015
Cumulative time(seconds): Batch elapsed: 60 Query total: 59.920 prepare: 0.016, execute: 59.889, fetch: 0.015, query count:  1, cumulative rows:  15

--------------------------------------------------------------------------------------------------------------------------------------------------------


Processed: 1 queries

You can see from the contents that we have a wealth of information, and with a couple of scripts you can extract the values for PRODUCT_NAME and the timings near the end of each file. When you process them together you should have something like

Date, Test No, Batch Start, Query From, Query To, Row Count, Total Time, Prepare Time, Execute Time, Fetch Time, Batch Elapsed, Prepare, Query Total, Fetch, Query Count, Cumulative Rows
21/01/2014, 'T00005', 15:24:08, 15:24:11, 15:24:17, 2, 5.906, 0.016, 5.859, 0.031, 8, 0.016, 5.859, 0.031, 1, 2
21/01/2014, 'T00007', 15:24:21, 15:24:22, 15:24:23, 9, 0.593, 0, 0.578, 0.015, 1, 0, 0.578, 0.015, 1, 9
21/01/2014, 'T00008', 15:24:24, 15:24:25, 15:24:38, 35, 12.671, 0.031, 12.624, 0.016, 13, 0.031, 12.624, 0.016, 1, 35

You can load this into your favourite database and perhaps even build a dashboard on it.
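As an example of the sort of script I mean, here is a minimal sketch that pulls PRODUCT_NAME and the timing summary out of one .out file. The regular expressions are written against the sample output above and may need adjusting for other nqcmd versions:

```python
import re

def parse_nqcmd_output(text):
    """Extract the test identifier and timing figures from nqcmd output text."""
    rec = {}
    m = re.search(r"PRODUCT_NAME='(T\d+)'", text)
    if m:
        rec["test_no"] = m.group(1)
    m = re.search(r"batch start: (\S+) Query from: (\S+) to: (\S+) "
                  r"Row count: (\d+)", text)
    if m:
        rec["batch_start"], rec["query_from"], rec["query_to"] = m.group(1, 2, 3)
        rec["row_count"] = int(m.group(4))
    m = re.search(r"total: ([\d.]+) prepare: ([\d.]+) "
                  r"execute: ([\d.]+) fetch: ([\d.]+)", text)
    if m:
        rec["total"], rec["prepare"], rec["execute"], rec["fetch"] = \
            map(float, m.groups())
    return rec

# Cut-down sample of one .out file, matching the structure above.
sample = (
    "SET VARIABLE PRODUCT_NAME='T03818', TEST=1:SELECT ...\n"
    "Clock time: batch start: 10:34:21.000 Query from: 10:34:22.000 "
    "to: 10:35:22.000 Row count: 15\n"
    " total: 59.920 prepare: 0.016 execute: 59.889 fetch: 0.015\n"
)
print(parse_nqcmd_output(sample))
```

Run over the whole directory of .out files, the dicts line up one-to-one with the rows of the table above.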





Monday, 20 May 2013

Aggregates, source control and Monitoring

I have been fortunate enough to have some time to do some housekeeping on the OBIEE systems.

Monitoring.


I think it's fair to say that we had very little monitoring in place, and we knew things were bad when users phoned to say that their pages were 'slow'. Getting a user to phone you takes some doing; usually they'll live with it, so things must get pretty bad before they do. Luckily this is only in our exploratory areas, otherwise people would be questioning just what it is we do.

Source Control


No-one likes interrupting their work to perform a manual step, like source control. It is one of those things that somehow gets missed the one time you really need it. I've been looking at ways to automate this behind the scenes, totally hands off, using SVN (Subversion).

Aggregates 


Need I say more? Level hierarchies and aggregates go hand in hand. If you can return the smallest number of rows needed to satisfy your Answers report, it will run a lot faster than running an aggregate operation over the raw data.

Expect a couple of posts over the coming days detailing just what I've been doing.

Thursday, 28 June 2012

OBIEE Performance Monitoring

Here's a little something that's hidden under the covers.

You need to alter the usual URL that you call the dashboard with

http://mydashboard:7001/answers/saw.dll

to

http://mydashboard:7001/answers/saw.dll?perfmon

If you are not logged in you will be prompted to log in, and then you have a huge page of real-time server stats. The data updates dynamically, and once I get a handle on the meanings behind the numbers I'll update you.