
Monday, 28 April 2014

OBIEE - Catalog Cleanup

Catalog Cleanup


Do you sometimes look at the catalog and wonder what kind of timebomb is lurking in there? I have nightmares. You've designed a catalog layout and asked developers to follow the standards and guidelines, but where are you one year later? What about two or even three years later? What Answers work has been started and abandoned, unfinished and not working? What Answers remain from 'deleted' or 'hidden' dashboards and now no longer work due to repository changes?

The challenge


We are a small team of developers, so for us automation of tasks is key. For each report in the catalog we need to know:
Does this report generate a usable query? (Is it broken?)
Is this report on a dashboard, or more than one dashboard?
If the report is on a dashboard, is it in the correct directory?

We anticipate that this will be repeated periodically - you can never be sure what creeps in and slips through the cracks - but at least the generation of the report can be automated and scheduled.

At a high level, we combine the output of some Catalog Manager reports and use NQCMD to run the logical queries and track the results.

The Catalog Reports


Firstly we will generate a Dashboard Report and an Analysis Report, upload the data into some database tables, and stitch the two together with an outer join so we can see the reports on dashboards as well as those that are not.

We generate the Analysis Report and the Dashboard Report with scripts like those below. Note that your system will have different paths and settings.

D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalogmanager\runcat.cmd -cmd report -excelFormat -distinct -offline D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalog\samplesales -forceOutputFile D:\CatalogCapture\Analysis.txt -type "Analysis" "Name" "Path" "Table" "Column" "Subject Area"

D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalogmanager\runcat.cmd -cmd report -excelFormat -distinct -offline D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalog\samplesales -forceOutputFile D:\CatalogCapture\Dashboard.txt -type "Dashboard" "Name" "Path" "Dashboard Style" "Dashboard Page Name" "Dashboard Page Path" "Dashboard Page Description" "Analysis Name" "Analysis Path"

Join the two outputs on "Path" and "Analysis Path" - think outer join here, because we still want to process those Analyses not on dashboards.
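A minimal sketch of the stitch, assuming the two outputs have been loaded into hypothetical staging tables CATALOG_ANALYSIS and CATALOG_DASHBOARD whose columns mirror the report headings:

-- Hedged sketch: a left outer join keeps Analyses that sit on no dashboard.
-- CATALOG_ANALYSIS and CATALOG_DASHBOARD are illustrative table names.
SELECT a.name,
       a.path,
       d.name                AS dashboard_name,
       d.dashboard_page_name
  FROM catalog_analysis a
  LEFT OUTER JOIN catalog_dashboard d
    ON d.analysis_path = a.path;

Rows where dashboard_name is NULL are the Analyses not used on any dashboard.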

Generating a Logical Query


What we are going to do is pass each analysis into NQCMD, using the current repository, to test whether it produces a valid Logical Query.

Create a text file called creds.txt and make the contents something like
login=weblogic
pwd=weblogic123

Now, using the report output from part 1, we can run the following to extract the query for an Answer from the catalog. The query will be saved into a file called myquery.lsql
D:\OBI\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalogmanager\runcat.cmd -cmd report -online http://%SERVR%/analytics/saw.dll -credentials "D:\CatalogCapture\creds.txt" -forceOutputFile "myquery.lsql" -folder "/shared/...../myreport" -type "Analysis" "SQL"

One of two things happens here:
1. A file containing the query is generated, or
2. An empty file is generated.

If we have an empty file, the Answer failed to generate a query and should be flagged for investigation.
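The check is easy to script; a minimal sketch in a Windows batch file, where flagged_answers.txt is an illustrative log file:

rem Hedged sketch: a zero-byte extract means no logical SQL was produced.
for %%f in (myquery.lsql) do (
  if %%~zf equ 0 echo EMPTY: %%f >> flagged_answers.txt
)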

However, if we have a query in the file, we need to pass the contents through NQCMD to see if the Answer works.

Extending into NQCMD


We have previously used NQCMD to do impact analysis on repository changes and can use the functionality almost unchanged.

"D:\OBI\Oracle_BI1\bifoundation\server\bin\nqcmd.exe" -d coreapplication_OH140093583 -u weblogic -p weblogic123 -s "myquery.lsql" -o "output.lres"

Once again one of two things can happen here:
1. The logical query passed in returns a result, or
2. An error is generated.

The error may be something like

[10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed: 
[nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist
 at OCI call OCIStmtExecute.
[nQSError: 17010] SQL statement preparation failed.
Statement execute failed


Processed: 1 queries
Encountered 1  errors

Once again flag the error.
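To automate the flagging, a hedged sketch in the same batch style (the DSN, paths and the flagged_answers.txt log are illustrative):

rem Hedged sketch: capture the nqcmd output and flag any nQSError for investigation.
"D:\OBI\Oracle_BI1\bifoundation\server\bin\nqcmd.exe" -d coreapplication_OH140093583 -u weblogic -p weblogic123 -s "myquery.lsql" -o "output.lres" > run.log 2>&1
findstr /c:"nQSError" run.log output.lres > nul
if not errorlevel 1 echo BROKEN: myquery.lsql >> flagged_answers.txt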

Wrapping things up and bringing it together


For each Answer we now know:
1. Is it used on a dashboard?
2. Where is the Answer stored?
3. Does it generate a logical query?
4. Does the logical query work?

Now we can focus efforts on fixing, rewriting or deleting those Analyses on dashboards, then those that are not on a dashboard but do not work.
Your initial step may simply be to identify those Answers that don't generate a logical query and feed them into a maintenance routine.
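If the four flags are loaded into a database table, triage can be a simple query; a sketch, assuming a hypothetical CATALOG_AUDIT table holding one row per Answer:

-- Hedged sketch: broken Answers that sit on a dashboard float to the top.
SELECT analysis_path, on_dashboard, generates_sql, query_works
  FROM catalog_audit
 WHERE generates_sql = 'N' OR query_works = 'N'
 ORDER BY on_dashboard DESC, analysis_path;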

Tuesday, 21 January 2014

Generating an OBIEE Performance Test Harness

You know you should be testing your OBIEE stack for performance, but what exactly should you be doing, and how can you do it?

Another question may be: what is your baseline? Do you have a known baseline for performance? What tools do you use?

What we want is a method of generating a test plan where we can run (repeatably) the reports on our dashboards. Ideally we would like to generate a series of tests that we can run before a release to generate a baseline and after a release to ensure that nothing (that we are testing) is broken.

So what is available to us "out of the box" that we can plug in and start using straight away?

We will use a couple of components to build up a test.

  • Usage Tracking
  • NQCMD


Usage tracking: 

Turn on usage tracking for one of your users and open a few dashboards and reports. If you have things configured correctly you should see some entries in the S_NQ_ACCT table. (I have made the assumption that you will enable database recording of usage tracking - see how to set this up here.)

The S_NQ_ACCT table tells us which components were run when you viewed the dashboard. To find out what components make up usage tracking, query the "QUERY_SRC_CD" column (a query for this follows the list below). A sample of results may show something like:
DashboardPrompt
Report
ValuePrompt
Member Browser Display Values
Soap
rawSQL
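The list can be produced with a simple query against the usage tracking table:

SELECT DISTINCT query_src_cd
  FROM s_nq_acct
 ORDER BY query_src_cd;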

The logical query run for each component is in the "QUERY_TEXT" column, though this truncates the full query after a number of characters; if, like me, you have some very large logical queries, it may be worthwhile getting the query out of the "QUERY_BLOB" column.
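For example, a sketch that reads the first 4000 characters of the full statement out of the LOB (4000 being the VARCHAR2 ceiling in plain SQL):

-- Hedged sketch: QUERY_TEXT truncates, so take the statement from QUERY_BLOB.
SELECT query_key,
       DBMS_LOB.SUBSTR (query_blob, 4000, 1) AS logical_sql
  FROM s_nq_acct
 WHERE query_src_cd = 'Report';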

NQCMD:

I have already discussed NQCMD here but it may be worthwhile going over some of the concepts briefly. NQCMD is a mechanism for executing a logical query against a repository for a user. This means we can execute a logical query against a repository of our choice and run the query as a named user.

The command is in the format
MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d coreapplication_OH140000583 -u <username> -p <password> -s <Query input file> -T -q  1> output.out

Running the help gives the following output
MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -help

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


Argument error near: -help
Command: MW_HOME\Oracle_BI1\bifoundation\server\bin\nqcmd.exe - a command line client which can issue SQL statements
                 against either Oracle BI server or a variety of ODBC compliant backend databases.
SYNOPSIS
         nqcmd [OPTION]...
DESCRIPTION
         -d<data source name>
         -u<user name>
         -p<password>
         -s<sql input file name>
         -o<output result file name>
         -D<Delimiter>
         -C<# number of fetched rows by column-wise binding>
         -R<# number of fetched rows by row-wise binding>
         -a (a flag to enable async processing)
         -f (a flag to enable to flush output file for each write)
         -H (a flag to enable to open/close a request handle for each query)
         -z (a flag to enable UTF8 instead of ACP)
         -utf16 (a flag to enable UTF16 instead of ACP)
         -q (a flag to turn off row output)
         -NoFetch (a flag to disable data fetch with query execution)
         -NotForwardCursor (a flag to disable forwardonly cursor)
         -v (a flag to display the version)
         -SessionVar <SessionVarName>=<SessionVarVAlue>

To use NQCMD we need to pass a logical query SQL in a file.
The logical query file can look something like

SET VARIABLE PRODUCT_NAME='T00103', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q0213EB1783381CBEC92AB1166D5FF8FC', TEST=1:SELECT .................. FETCH FIRST 25001 ROWS ONLY

Now we have looked at the two parts we'll use let's put it all together.

To start we need to mine the usage tracking table for the logical queries of the reports on our dashboards that we wish to test.
Each query should be in its own file.
Ideally each query should be uniquely identifiable, so we can trace a query back to a relevant report or dashboard and also as a means of identification when running the test.

Generating the test logical query input files: I have knocked up a SQL script that extracts the queries from the Usage Tracking Table and saves them in a series of files. Because PL/SQL cannot spool to multiple files dynamically, the following script generates an intermediate file, which is then called and executed.

SET TERMOUT OFF
SET SERVEROUTPUT ON
SET ECHO OFF
SET DEFINE OFF
SET FEEDBACK OFF
SET LINESIZE 5000
SPOOL intermediate_file.sql

DECLARE
  -- Successful 'Report' queries only, small enough to convert to VARCHAR2.
  CURSOR cr_qrys  IS
      SELECT DISTINCT TO_CHAR (query_blob) AS query_text, query_key
        FROM s_nq_acct
       WHERE     query_src_cd = 'Report'
             AND dbms_lob.getlength (query_blob) <= 3900
             AND ERROR_TEXT IS NULL
             AND row_count > 0
    ORDER BY query_key;

  i          NUMBER := 0;
  filename   VARCHAR2 (5);     -- zero-padded sequence, reused as the test PRODUCT_NAME
  p1         VARCHAR2 (5000);  -- SET VARIABLE prefix plus the logical query
BEGIN
  dbms_output.enable (NULL);   -- NULL = unlimited output buffer

  FOR lr_qrys IN cr_qrys LOOP
    i          := i + 1;
    filename   := LPAD (TO_CHAR (i), 5, '0');
    p1         :=
         'SET VARIABLE PRODUCT_NAME=''T'
      || filename
      || ''', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME=''Q'
      || lr_qrys.query_key
      || ''', TEST=1:'
      || lr_qrys.query_text;
    dbms_output.put_line ('SET COLSEP ,  
SET PAGESIZE 0
SET TRIMSPOOL ON 
SET HEADSEP OFF
SET FEEDBACK OFF
SET DEFINE OFF
SET ECHO OFF
SET TERMOUT OFF
SET LINESIZE 5000
SPOOL test_file_' || filename || '.inp
SELECT q''[' || p1 || ']''
FROM DUAL;
SPOOL OFF');
  END LOOP;
END;
/

SPOOL OFF
PROMPT     executing intermediate file
@intermediate_file.sql;
SET SERVEROUTPUT OFF

This generates a series of files named test_file_xxxxx.inp, where the xxxxx matches the test PRODUCT_NAME variable of the query.

This is the first version of our baseline series of tests. As new dashboards and reports are released you will need to update the test input files with the new report queries, and potentially remove those reports that are no longer relevant.

The next step is to pass each file through NQCMD in turn and capture the output. Assuming the logical query test input files are in a subdirectory called testcases (and using a DOS batch file), we can loop through the .inp files as follows:

for %%i in (testcases\*.inp) do (
<mw_home>\Oracle_BI1\bifoundation\server\bin\nqcmd.exe -d <dsn to repository> -u <username> -p <password> -s %%i -T -q > %%~ni.out
)

Once the above loop has finished we will have a series of files called test_file_xxxxx.out

Peering into an output file, you will see it has a structure as detailed below.


-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


SET VARIABLE PRODUCT_NAME='T03818', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q4DFD2F426D93BC2DA37036577533BF8B', TEST=1:SELECT ........... ASC NULLS LAST, 3 ASC NULLS LAST FETCH FIRST 200001 ROWS ONLY

SET VARIABLE PRODUCT_NAME='T03818', DISABLE_PLAN_CACHE_HIT=1, DISABLE_CACHE_HIT=1, LOGLEVEL=2, QUERY_NAME='Q4DFD2F426D93BC2DA37036577533BF8B', TEST=1:SELECT .............................

--------------------------------------------------------------------------------------------------------------------------------------------------------
s_0          s_1                                                                                                  s_2          3 s_4                    
--------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------
Clock time: batch start: 10:34:21.000 Query from: 10:34:22.000 to: 10:35:22.000 Row count: 15
 total: 59.920 prepare: 0.016 execute: 59.889 fetch: 0.015
Cumulative time(seconds): Batch elapsed: 60 Query total: 59.920 prepare: 0.016, execute: 59.889, fetch: 0.015, query count:  1, cumulative rows:  15

--------------------------------------------------------------------------------------------------------------------------------------------------------


Processed: 1 queries

You can see from the contents that we have a wealth of information, and with a couple of scripts (a sketch follows the sample below) you can extract the values for PRODUCT_NAME and the timings near the end of each file. When you process them together you should have something like:

Date, Test No, Batch Start, Query From, Query To, Row Count, Total Time, Prepare Time, Execute Time, Fetch Time, Batch Elapsed, Prepare, Query Total, Fetch, Query Count, Cumulative Rows
21/01/2014, 'T00005', 15:24:08, 15:24:11, 15:24:17, 2, 5.906, 0.016, 5.859, 0.031, 8, 0.016, 5.859, 0.031, 1, 2
21/01/2014, 'T00007', 15:24:21, 15:24:22, 15:24:23, 9, 0.593, 0, 0.578, 0.015, 1, 0, 0.578, 0.015, 1, 9
21/01/2014, 'T00008', 15:24:24, 15:24:25, 15:24:38, 35, 12.671, 0.031, 12.624, 0.016, 13, 0.031, 12.624, 0.016, 1, 35

You can then load this into your favourite database and perhaps even build a dashboard on it.
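The extraction itself can be a simple grep of the interesting lines; a minimal sketch using Windows findstr, where timings_raw.txt is an illustrative staging file to post-process before loading:

rem Hedged sketch: pull the test id and timing lines out of every output file.
for %%f in (test_file_*.out) do (
  findstr /c:"PRODUCT_NAME" /c:"Clock time" /c:"Cumulative time" "%%f" >> timings_raw.txt
)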





Monday, 1 July 2013

OBIEE - Testing the Repository Using NQCMD

You have finished making your changes to the repository and the consistency check reports no errors or warnings. Great, it's all working as planned. Unfortunately, all that the consistency checker does is make sure that logically the repository hangs together, not that your business logic is correct. A consistency check only catches the most common problems with the construction of the physical layer and the facts and dimensions.

Now to test that the outputs are as expected. (See also Generating an OBIEE Performance Test Harness.)

NQCMD

We can automate this using nqcmd and sample reports.

Before starting NQCMD, run bi-init (found in ORACLE_INSTANCE/bifoundation/OracleBIApplication/coreapplication/setup) to give a properly configured command-line environment in which to run the OBI commands.

A look at the help for the command reveals a load of options:

C:\obiee11g\Oracle_BI1\bifoundation\server\bin>nqcmd -?

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------


Argument error near: -?
Command: nqcmd - a command line client which can issue SQL statements
                 against either Oracle BI server or a variety
                 of ODBC compliant backend databases.
SYNOPSIS
         nqcmd [OPTION]...
DESCRIPTION
         -d<data source name>
         -u<user name>
         -p<password>
         -s<sql input file name>
         -o<output result file name>
         -D<Delimiter>
         -C<# number of fetched rows by column-wise binding>
         -R<# number of fetched rows by row-wise binding>
         -a (a flag to enable async processing)
         -f (a flag to enable to flush output file for each write)
         -H (a flag to enable to open/close a request handle for each query)
         -z (a flag to enable UTF8 in the output result file
         -utf16 (a flag to enable UTF16 for communicating to Oracle BI ODBC driver)
         -q (a flag to turn off row output)
         -NoFetch (a flag to disable data fetch with query execution)
         -NotForwardCursor (a flag to disable forwardonly cursor)
         -v (a flag to display the version)
         -ONFormat<FormatString, i.e. TM9, 0D99>

Before we go further, go to Answers and build a test query against the part of the repository you wish to test; in the Advanced tab, copy out the query and save it to a file. This will be the SQL file used in the test.

Run the following; your connection data source name will be different, as hopefully will your username and password.

nqcmd -d coreapplication_OH206444865 -u weblogic -p weblogic -s C:\Test\test.sql -o C:\Test\test.txt -D; -utf16

Note that -D specifies the delimiter between SQL statements in the input file, if more than one select statement is in there.
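For example, a two-statement input file might look like this (the subject area and column names are illustrative, not taken from a real catalog):

SELECT "Time"."Per Name Year" saw_0, "Base Facts"."Revenue" saw_1 FROM "A - Sample Sales";
SELECT "Products"."Brand" saw_0 FROM "A - Sample Sales"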

Your output file should contain the query and end something like
....
....
ORDER BY 1, 2 ASC NULLS LAST
FETCH FIRST 250001 ROWS ONLY

-------------------------
s_0          s_1         
-------------------------
0            00/000008/  
-------------------------
Row count: 1
-------------------------


Processed: 1 queries

If you set up a DSN to the other server, you can run the same command line changing only the contents of the -d flag, then compare the results.
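A sketch of that comparison (the second DSN name is illustrative), using fc to diff the two outputs - /u because the files were written as Unicode:

nqcmd -d coreapplication_OH206444865 -u weblogic -p weblogic -s C:\Test\test.sql -o C:\Test\before.txt -D; -utf16
nqcmd -d coreapplication_OTHERSERVER -u weblogic -p weblogic -s C:\Test\test.sql -o C:\Test\after.txt -D; -utf16
fc /u C:\Test\before.txt C:\Test\after.txt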

Always useful if you want to test that nothing is broken prior to releasing a change.

Update:
See here for a worked example of how to automate all this in Generating an OBIEE Performance Test Harness