Tuesday 21 January 2014

Generating an OBIEE Performance Test Harness

You know you should be testing your OBIEE stack for performance, but what exactly should you be doing, and how can you do it?

Another question may be: what is your baseline? Do you have a known performance baseline, and what tools do you use to measure it?

What we want is a method of generating a test plan where we can repeatably run the reports on our dashboards. Ideally we would like to generate a series of tests that we can run before a release to establish a baseline, and after a release to ensure that nothing (that we are testing) is broken.

So what is available to us "out of the box" that we can plug in and start using straight away?

We will use a couple of components to build up a test.

  • Usage Tracking
  • NQCMD


Usage tracking: 

Turn on usage tracking for one of your users and open a few dashboards and reports. If you have things configured correctly, you should see some entries in the S_NQ_ACCT table (I have assumed that you will enable database recording of usage tracking; see how to set this up here).
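For reference, the usage tracking block in NQSConfig.INI looks roughly like the following; the database, schema and connection pool names are placeholders, and the linked post covers the full setup:

[USAGE_TRACKING]
ENABLE = YES;
DIRECT_INSERT = YES;
PHYSICAL_TABLE_NAME = "<database>"."<schema>"."S_NQ_ACCT";
CONNECTION_POOL = "<database>"."<connection pool>";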

The S_NQ_ACCT table tells us which components were run when you viewed the dashboards. To find out which component types usage tracking has recorded, query the "QUERY_SRC_CD" column. A sample of results may show something like:
DashboardPrompt
Report
ValuePrompt
Member Browser Display Values
Soap
rawSQL
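
A quick way to see which component types your own usage tracking data contains, and how often each appears, is something like the following (assuming S_NQ_ACCT is in the schema you are connected to):

SELECT query_src_cd, COUNT (*) AS queries
  FROM s_nq_acct
 GROUP BY query_src_cd
 ORDER BY query_src_cd;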

The logical query run for each component is in the "QUERY_TEXT" column, but this truncates the full query after a set number of characters, so if, like me, you have some very large logical queries, it is worth pulling the query out of the "QUERY_BLOB" column instead.
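As a minimal sketch, mirroring the filter used in the extraction script later in this post, the full logical SQL can be read with something like this (the length filter keeps the LOB within what TO_CHAR can convert):

-- Full logical SQL for successful report queries, read from the LOB
-- column rather than the truncated QUERY_TEXT
SELECT query_key, TO_CHAR (query_blob) AS full_logical_sql
  FROM s_nq_acct
 WHERE query_src_cd = 'Report'
   AND dbms_lob.getlength (query_blob) <= 3900;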

NQCMD:

I have already discussed NQCMD here, but it is worth going over the concept briefly: NQCMD is a mechanism for executing a logical query against a repository of our choice, running as a named user.

The command is in the format:
MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d coreapplication_OH140000583 -u <username> -p <password> -s <Query input file> -T -q  1> output.out
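
A filled-in example might look like this; the user name, password, and file names here are of course placeholders:

MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d coreapplication_OH140000583 -u weblogic -p Welcome1 -s test_file_00001.inp -T -q 1> test_file_00001.out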

Running the help gives the following output:
MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -help

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


Argument error near: -help
Command: MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe - a command line client which can issue SQL statements
                 against either Oracle BI server or a variety of ODBC compliant backend databases.
SYNOPSIS
         nqcmd [OPTION]...
DESCRIPTION
         -d<data source name>
         -u<user name>
         -p<password>
         -s<sql input file name>
         -o<output result file name>
         -D<Delimiter>
         -C<# number of fetched rows by column-wise binding>
         -R<# number of fetched rows by row-wise binding>
         -a (a flag to enable async processing)
         -f (a flag to enable to flush output file for each write)
         -H (a flag to enable to open/close a request handle for each query)
         -z (a flag to enable UTF8 instead of ACP)
         -utf16 (a flag to enable UTF16 instead of ACP)
         -q (a flag to turn off row output)
         -NoFetch (a flag to disable data fetch with query execution)
         -NotForwardCursor (a flag to disable forwardonly cursor)
         -v (a flag to display the version)
         -SessionVar <SessionVarName>=<SessionVarVAlue>

To use NQCMD we need to pass it the logical SQL query in a file.
The input file can look something like this:

SET VARIABLE PRODUCT_NAME='T00103', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q0213EB1783381CBEC92AB1166D5FF8FC', TEST=1:SELECT .................. FETCH FIRST 25001 ROWS ONLY

Note the DISABLE_CACHE_HIT and DISABLE_PLAN_CACHE_HIT session variables: they stop the BI Server satisfying the query from its cache, so each test run exercises the full stack rather than returning cached results. Now we have looked at the two parts we'll use, let's put it all together.

To start we need to mine the usage tracking table for the logical queries of the reports on our dashboards that we wish to test.
Each query should be in its own file.
Ideally each query should be uniquely identifiable, so we can trace it back to the relevant report or dashboard and identify it when running the test.

Generating the test logical query input files: I have knocked up a SQL script that extracts the queries from the usage tracking table and saves them in a series of files. Because SQL*Plus cannot switch spool files dynamically from inside a PL/SQL block, the following script generates an intermediate script, which is then called and executed.

SET TERMOUT OFF
SET SERVEROUTPUT ON
SET ECHO OFF
SET DEFINE OFF
SET FEEDBACK OFF
SET LINESIZE 5000
SPOOL intermediate_file.sql

DECLARE
  -- Successful 'Report' queries whose full logical SQL fits within
  -- the 4000-character limit of TO_CHAR on a LOB
  CURSOR cr_qrys  IS
      SELECT DISTINCT TO_CHAR (query_blob) AS query_text, query_key
        FROM s_nq_acct
       WHERE     query_src_cd = 'Report'
             AND dbms_lob.getlength (query_blob) <= 3900
             AND error_text IS NULL
             AND row_count > 0
    ORDER BY query_key;

  i          NUMBER := 0;      -- sequence number, used in the file name and PRODUCT_NAME
  filename   VARCHAR2 (5);
  p1         VARCHAR2 (5000);  -- the assembled NQCMD input line
BEGIN
  dbms_output.enable (NULL);   -- unlimited output buffer

  FOR lr_qrys IN cr_qrys LOOP
    i          := i + 1;
    filename   := LPAD (TO_CHAR (i), 5, '0');
    -- Prefix the logical SQL with the session variables that tag the test
    p1         :=
         'SET VARIABLE PRODUCT_NAME=''T'
      || filename
      || ''', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME=''Q'
      || lr_qrys.query_key
      || ''', TEST=1:'
      || lr_qrys.query_text;
    -- Emit a SQL*Plus snippet that spools this one query to its own .inp
    -- file (the q'[...]' quoting assumes the logical SQL never contains ]')
    dbms_output.put_line ('SET COLSEP ,  
SET PAGESIZE 0
SET TRIMSPOOL ON 
SET HEADSEP OFF
SET FEEDBACK OFF
SET DEFINE OFF
SET ECHO OFF
SET TERMOUT OFF
SET LINESIZE 5000
SPOOL test_file_' || filename || '.inp
SELECT q''[' || p1 || ']''
FROM DUAL;
SPOOL OFF');
  END LOOP;
END;
/

SPOOL OFF
PROMPT     executing intermediate file
@intermediate_file.sql;
SET SERVEROUTPUT OFF
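
To generate the input files, run the script from SQL*Plus connected as the owner of the usage tracking table; the script name here is one I have made up:

sqlplus <ut_user>/<ut_password>@<tns_alias> @generate_test_files.sql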

This generates a series of files named test_file_xxxxx.inp, where xxxxx matches the PRODUCT_NAME test variable set in the query.

This is the first version of our baseline series of tests. As new dashboards and reports are released, you will need to update the test input files with the new report queries and potentially remove the reports that are no longer relevant.

The next step is to pass each file through NQCMD in turn and capture the output. Assuming the logical query test input files are in a subdirectory called testcases, and using a DOS batch script, we can loop through the .inp files as follows:

for /r testcases %%i in (*.inp) do (
  <mw_home>\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d <dsn to repository> -u <username> -p <password> -s "%%i" -T -q > "%%~ni.out"
)

Once the above loop has finished, we will have a series of files called test_file_xxxxx.out.

Peering into an output file, you will see it has a structure as detailed below:


-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


SET VARIABLE PRODUCT_NAME='T03818', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q4DFD2F426D93BC2DA37036577533BF8B', TEST=1:SELECT ........... ASC NULLS LAST, 3 ASC NULLS LAST FETCH FIRST 200001 ROWS ONLY

SET VARIABLE PRODUCT_NAME='T03818', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q4DFD2F426D93BC2DA37036577533BF8B', TEST=1:SELECT .............................

--------------------------------------------------------------------------------------------------------------------------------------------------------
s_0          s_1                                                                                                  s_2          3 s_4                    
--------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------
Clock time: batch start: 10:34:21.000 Query from: 10:34:22.000 to: 10:35:22.000 Row count: 15
 total: 59.920 prepare: 0.016 execute: 59.889 fetch: 0.015
Cumulative time(seconds): Batch elapsed: 60 Query total: 59.920 prepare: 0.016, execute: 59.889, fetch: 0.015, query count:  1, cumulative rows:  15

--------------------------------------------------------------------------------------------------------------------------------------------------------


Processed: 1 queries

You can see from the contents that we have a wealth of information, and with a couple of scripts you can extract the value of PRODUCT_NAME and the timings near the end of each file (a minimal sketch of the extraction follows the table below). When you process them together you should have something like:

Date,Test No,Batch Start,Query From,Query To,Row Count,Total Time,Prepare Time,Execute Time,Fetch Time,Batch Elapsed,Prepare,Query Total,Fetch,Query Count,Cumulative Rows
21/01/2014,'T00005',15:24:08,15:24:11,15:24:17,2,5.906,0.016,5.859,0.031,8,0.016,5.859,0.031,1,2
21/01/2014,'T00007',15:24:21,15:24:22,15:24:23,9,0.593,0,0.578,0.015,1,0,0.578,0.015,1,9
21/01/2014,'T00008',15:24:24,15:24:25,15:24:38,35,12.671,0.031,12.624,0.016,13,0.031,12.624,0.016,1,35
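
As a starting point for that extraction, here is a minimal sketch using findstr. It assumes the .out files are in the current directory and just gathers the tagged query lines and cumulative timing lines into one place; note the query line is echoed twice in each file, so allow for that when parsing:

@echo off
rem Minimal sketch: collect the PRODUCT_NAME-tagged query line and the
rem cumulative timing line from every NQCMD output file into summary.txt.
del summary.txt 2>nul
for %%i in (test_file_*.out) do (
  echo ==== %%i >> summary.txt
  findstr /C:"PRODUCT_NAME" /C:"Cumulative time" "%%i" >> summary.txt
)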

You can then load these results into your favourite database and perhaps even build a dashboard on them.




