
How to do Performance Tuning in OBIEE





 Performance Tuning Checklist 



1. Browser Version, Network Connection, Memory, Proxy Server


2. Web/Chart Cache Settings


3. NQSConfig.ini Settings, Query Cache, Physical Connections, Metadata Configuration, Query Plan

4. Materialized Views, Fragments, Indexes, Aggregates.




Database Tuning


There are a number of options for tuning a database for OLAP reporting.

These concepts will not work well in an OLTP environment. 

 

Tuning Options

 

1) Denormalize data – Denormalizing data can reduce the number of joins a query needs to perform, thus reducing query response time.
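As a rough illustration (the table and column names below are hypothetical, not from any specific schema), a denormalized dimension saves joins at query time:

-- Normalized: three joins to reach the category name.
SELECT c.category_name, SUM(f.sales_amount)
FROM   sales_fact f
JOIN   product_dim p     ON f.product_id  = p.product_id
JOIN   subcategory_dim s ON p.subcat_id   = s.subcat_id
JOIN   category_dim c    ON s.category_id = c.category_id
GROUP BY c.category_name;

-- Denormalized: the category name is stored on the product row, so one join suffices.
SELECT p.category_name, SUM(f.sales_amount)
FROM   sales_fact f
JOIN   product_dim_denorm p ON f.product_id = p.product_id
GROUP BY p.category_name;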

 

2) Indexing


• Index all primary and foreign keys


• Index columns used frequently in drill-downs or hierarchies


• Use B-Tree and Bitmap indexes where appropriate.
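For instance, in Oracle syntax (the table and column names are made up for illustration), a B-tree index suits a high-cardinality join key, while a bitmap index suits a low-cardinality attribute used in filters and drill-downs:

-- B-tree index (the default) on a foreign key column.
CREATE INDEX idx_sales_product_fk ON sales_fact (product_id);

-- Bitmap index on a low-cardinality drill-down column.
CREATE BITMAP INDEX idx_sales_region ON sales_fact (region_code);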

 


3) Partitioning – For example, if a large number of your queries contain a WHERE clause based on Year and Month, partitioning by year and month can dramatically improve response time.
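A minimal Oracle sketch of the idea (hypothetical table; partition names and dates are placeholders): queries that filter on the partition key only scan the relevant partitions.

CREATE TABLE sales_fact (
  sale_date    DATE,
  product_id   NUMBER,
  sales_amount NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2011_01 VALUES LESS THAN (DATE '2011-02-01'),
  PARTITION p_2011_02 VALUES LESS THAN (DATE '2011-03-01'),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- Only the January partition is scanned for this filter.
SELECT SUM(sales_amount)
FROM   sales_fact
WHERE  sale_date >= DATE '2011-01-01'
AND    sale_date <  DATE '2011-02-01';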

 


4) Avoid using a large number of joins to satisfy typical queries. Too many joins may indicate an issue with the data model, and you may want to consider further denormalization.

 


5) When joining multiple data sources, make sure each data source is tuned appropriately, as you now have more than one data source that can cause performance problems. In addition, the network connectivity between each data source could become an issue.

 

6) Create aggregate tables when necessary, not as the first step of query performance tuning. Adding aggregates for the sake of using aggregates will increase the window necessary to refresh the database.
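One common way to build such an aggregate in Oracle is a materialized view. The sketch below uses hypothetical names and an on-demand complete refresh, which is exactly where the extra refresh-window cost comes from:

CREATE MATERIALIZED VIEW sales_month_agg_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT TRUNC(sale_date, 'MM') AS sale_month,
       product_id,
       SUM(sales_amount)      AS total_sales
FROM   sales_fact
GROUP BY TRUNC(sale_date, 'MM'), product_id;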


Repository Tuning 


There are a number of options to ensure performance is not hindered by the OBIEE repository.

 

Tuning Options

 

Physical Layer

 

1) Foreign Key joins – use this join type to include the joins that will be used in the physical layer. Joining every table to every other table just because you can might not make sense, as you should only let users build queries that answer relevant business questions.


 

2) Avoid using the 'Select' object type, as any database indexing and partitioning will not be applied to any query using this table object.
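To illustrate the concern (a simplified sketch with hypothetical tables, not actual BI Server output): a 'Select' object is sent to the database as an inline view, so filters are applied to the view's output rather than directly to the base table, which on some databases can keep the optimizer from using indexes or partition pruning.

-- Physical table object: the filter hits the base table directly.
SELECT SUM(sales_amount)
FROM   sales_fact
WHERE  sale_date = DATE '2011-01-15';

-- 'Select' (opaque view) object: the defining query is wrapped as an inline view.
SELECT SUM(t.sales_amount)
FROM   (SELECT sale_date, sales_amount FROM sales_fact) t
WHERE  t.sale_date = DATE '2011-01-15';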


 

3) Enable caching so that commonly used queries are not constantly regenerated by the database. Don't forget to enable caching in the NQSConfig.ini file!
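For reference, the BI Server cache is controlled by the [CACHE] section of NQSConfig.ini. The excerpt below is only a typical example (the path and sizes are placeholders, and parameter names should be checked against the NQSConfig.ini shipped with your OBIEE version):

[ CACHE ]
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;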

 


4) When joining multiple data sources, make sure each data source is tuned appropriately, as you now have more than one data source that can cause performance problems. In addition, the network connectivity between each data source could become an issue.









5) Use the appropriate call interface for the data source. Not using the most efficient call interface could change the format of the SQL that is sent to the database.

 

6) Increase the maximum number of connections allowed to a database at one time.

 

7) Review the database object's Features tab to ensure all features for your database are checked appropriately. These values directly impact the physical SQL that is sent to the database.

 

8) Make sure you join your dimension tables to your fact tables, and not your fact tables to your dimension tables.








Business Layer


1) In the Business Model Diagram, use Complex Joins to avoid conflicting joins between the Business Model Layer and the Physical Layer.


2) Separate facts into Logical Tables as this will help reduce development time.  Each time an object is checked out from a Logical Table, every logical column within that Logical Table will need to be processed through the consistency check.  If you put all of your metrics into the same Logical Table your consistency check will take several minutes.


3) CASE or IF statements on dimensional logical columns should be pushed to the database level.
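For example (a sketch with hypothetical column names), banding logic that would otherwise sit in a logical column expression can be defined once in a database view, so the CASE is evaluated by the database rather than the BI Server:

CREATE OR REPLACE VIEW customer_dim_v AS
SELECT customer_id,
       customer_name,
       CASE
         WHEN annual_revenue >= 1000000 THEN 'Large'
         WHEN annual_revenue >=  100000 THEN 'Medium'
         ELSE 'Small'
       END AS revenue_band
FROM   customer_dim;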


4) Create a Dimension (hierarchy) for each dimension Logical Table and assign the appropriate elements at each level of the hierarchy. This will help the Analytics Server construct SQL utilizing the most efficient path to satisfy the request.


5) Assign each Fact logical table source to the appropriate Dimension level. This, too, will help the tool use the appropriate aggregate tables.






