
Cache Management in OBIEE 11g


In OBIEE 11g we have two types of cache:

1.    Oracle BI Presentation Server Cache

2.    Oracle BI Server Cache


1.  Oracle BI Presentation Server Cache

1)    It is a temporary cache, stored in the following location: C:\OBI11g.6\instances\instance2\tmp\OracleBIPresentationServicesComponent\coreapplication_obips1\sawrptcache

2)    It is unique per user and session.

2.  Oracle BI Server Cache

1)    In OBIEE 11g, cache parameters are managed using EM.

2)    EM internally updates the NQSConfig.INI file.

3)    In the NQSConfig.INI file, if ENABLE = YES then caching is enabled; otherwise it is disabled.

4)    By default, cache files are stored in the following path: E:/bi11g/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/cache
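For reference, these settings live in the [CACHE] section of NQSConfig.INI, which EM manages in 11g. A minimal illustrative excerpt is shown below; the path matches the environment above and the size limits are common defaults, not required values:

[CACHE]
ENABLE = YES;
DATA_STORAGE_PATHS = "E:/bi11g/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE = 20 MB;
MAX_CACHE_ENTRIES = 1000;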


Advantages of cache:

1)    Less network traffic.

2)    Significant performance improvement.


Disadvantages of cache:

1)    There is a chance of getting stale data.

2)    Disk space is consumed by the cache files.

3)    The cache has to be maintained (purged and seeded) by the administrator.

Cache architecture:








The report's logical SQL is compared with the logical SQL of the entries stored in the cache; if a match is found, the corresponding cached result is returned to the user, otherwise the query is executed against the database.
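For testing cache hits, a request variable can be added in front of the logical SQL to force a query past the cache; this is the same setting used in the Prefix field of a report's Advanced tab. A minimal sketch, assuming an illustrative subject area and column names:

SET VARIABLE DISABLE_CACHE_HIT=1;
SELECT "Products"."Generic Description", "Sales Facts"."Dollars" FROM "Supplier Sales";

With DISABLE_CACHE_HIT=1 the BI Server skips the cache lookup for that request and goes straight to the database.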

Enabling cache:


1)    Open EM --> Capacity Management --> Performance --> Lock and Edit Configuration --> select Cache Enabled --> Apply --> Activate Changes --> close --> restart the Oracle BI Server.

2)    Open the NQSConfig.INI file and notice that ENABLE = YES. Run any report, then go to the cache location and observe that a cache file has been created.

3)    Open the RPD in online mode --> go to Manage --> Cache --> observe all the parameters.

4)    Sign out of Analytics --> sign in as abell (password: Admin123) --> run the same report --> in the RPD go to Manage --> Cache --> notice that the Use Count is updated to 1.


Stale data:

Old data is called stale data.

Suppose a report runs against the DB and its result is stored in the cache, and the DB is then updated with new values. If we run the same report again, we get the output from the cache, which is old data. To eliminate this problem we need to purge the cache and then seed it.

Purging:

Cleaning the cache files is called purging.

We have four types of purging:

1.    Manual.

2.    Persistence time.

3.    Event polling table.

4.    ODBC functions.

Manual:

After enabling the cache, run any report so that a cache entry is created --> open the RPD in online mode --> Manage --> Cache --> right-click on the entry --> click Purge --> OK --> close.


Persistence time:

It is used to purge the cache after a fixed time interval. To set this correctly, we need to know accurately how frequently the table is updated.

Process:

Open the RPD in online mode --> double-click on D1_PRODUCTS --> go to the General tab --> select Cache persistence time --> set it to 2 minutes --> OK --> check in --> save --> reload server metadata --> develop a report with columns from this table and notice that a cache entry is created --> after 2 minutes the cache entry is purged automatically.

Note: we can control caching at the table level: in the Physical layer, double-click on any table --> General tab --> enable or disable the Cacheable option. If Cacheable is selected with 'Cache never expires', entries for that table are never purged automatically.

Event polling table:


Step 1: creating the event polling table:

Log in to the DB as the supplier2 user and create the table by copying it from the BIPLATFORM schema (assumed here to be DEV_BIPLATFORM):

CREATE TABLE S_NQ_EPT AS SELECT * FROM DEV_BIPLATFORM.S_NQ_EPT;

Verify the table:

SELECT * FROM S_NQ_EPT;
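If the BIPLATFORM schema is not reachable from the supplier2 user, the polling table can also be created directly. A sketch of the standard S_NQ_EPT structure, assuming the documented column names (the data type lengths here are illustrative):

CREATE TABLE S_NQ_EPT
(
  UPDATE_TYPE     NUMBER(10)     NOT NULL,   -- numeric code describing the change
  UPDATE_TS       DATE           NOT NULL,   -- when the change happened
  DATABASE_NAME   VARCHAR2(120),
  CATALOG_NAME    VARCHAR2(120),
  SCHEMA_NAME     VARCHAR2(120),
  TABLE_NAME      VARCHAR2(120)  NOT NULL,   -- physical table whose cache entries should be purged
  OTHER_RESERVED  VARCHAR2(120)
);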

Step 2: creating a trigger:

To monitor D1_PRODUCTS, create the following trigger:

CREATE OR REPLACE TRIGGER EVENT_TABLE_UPDATE_TRIGGER
AFTER INSERT OR UPDATE OR DELETE ON D1_PRODUCTS
FOR EACH ROW
BEGIN
  -- UPDATE_TYPE is a numeric change code; 1 is used here as a typical value
  INSERT INTO S_NQ_EPT (UPDATE_TYPE, UPDATE_TS, TABLE_NAME)
  VALUES (1, SYSDATE, 'D1_PRODUCTS');
END;
/

Confirm that the trigger is working by executing the statements below:

UPDATE D1_PRODUCTS SET GENERICDESCRIPTION = 'potato chip' WHERE GENERICDESCRIPTION = 'potato chips';

SELECT * FROM S_NQ_EPT;

A new row in S_NQ_EPT confirms that the trigger is working.

Configuring the EPT table:

In the Physical layer, right-click on supplier2 --> connection pool --> Import Metadata --> import the S_NQ_EPT table --> Finish.

Tools menu --> Utilities --> select Oracle BI Event Tables --> Execute --> select S_NQ_EPT --> set the polling frequency to 2 minutes --> OK --> check in the changes --> OK.

Testing:

Develop a report that includes the GenericDescription column and make sure a cache entry is created.

Update some data in D1_PRODUCTS; after 2 minutes (the polling frequency), the cache entries for that table are purged and the processed rows are deleted from S_NQ_EPT.

ODBC functions:

We have four ODBC functions that are useful for purging the cache programmatically:

1.    SAPurgeAllCache

2.    SAPurgeCacheByDatabase

3.    SAPurgeCacheByTable

4.    SAPurgeCacheByQuery

This is an accurate way to purge the cache. Normally these functions are executed by the ETL team as a post-load step.
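A sketch of how the four calls look when issued against the BI Server; the database, schema, table, and logical SQL used here are placeholders for your own objects, and the calls can be typed into Issue SQL (as in the next step) or scripted through nqcmd as part of a post-load job:

Call SAPurgeAllCache();
Call SAPurgeCacheByDatabase( 'orcl' );
Call SAPurgeCacheByTable( 'orcl', NULL, 'SUPPLIER2', 'D1_PRODUCTS' );
Call SAPurgeCacheByQuery( 'SELECT "Products"."Generic Description" FROM "Supplier Sales"' );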

Process for understanding:

Develop a report --> make sure a cache entry is created --> Administration --> under Maintenance and Troubleshooting --> Issue SQL --> type Call SAPurgeAllCache() --> click Issue SQL --> go and check the cache folder --> the cache files are removed.

Seeding:

Loading results into the cache folder in advance is called seeding.

There are two types of seeding:

1.    Manual seeding.

2.    By using a schedule (agent).


1. Manual seeding:


Navigate to the report in the catalog and click on it; the report runs and its result is automatically stored in the cache.

2. By using a schedule:

Log in to Analytics --> develop a report with the Dollars measure --> save it as 'seeding report' --> New --> Agent --> Schedule --> Frequency: Once --> set the date and time about 2 minutes ahead of the current time --> click Delivery Content --> select the content type Analysis --> Browse --> select 'seeding report' --> click Destinations --> select Oracle BI Server Cache --> save it as 'seed agent' --> after 2 minutes go to the cache folder and notice that a cache file is available.
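Seeding can also be done programmatically. A minimal sketch using the SASeedQuery ODBC procedure, which runs a logical query and stores its result in the cache (the subject area and columns below are illustrative); it can be issued from Administration --> Issue SQL just like the purge calls:

Call SASeedQuery( 'SELECT "Products"."Generic Description", "Sales Facts"."Dollars" FROM "Supplier Sales"' );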

Comments

  1. Awesome, bro... finally I have cleared all my doubts.

  2. Hi,
     I have made a small change in the month prompt on the analytics page and ran the report. I got the physical query from the cache. Without using SAPurgeAllCache(), how/from where can I get the physical query for the modified report? Can you please help with a solution?
  3. My question is: in the NQSConfig file ENABLE = YES, and a cache is already created in the folder. When I ran a report, the cache was not hit. Can you give me a suggestion?
  4. How to purge the cache from the report level?

     Reply: To purge the cache at the report level, go to the Advanced tab. In the Prefix field, you can add: SET VARIABLE DISABLE_CACHE_HIT=1;
  5. Nice blog. Please give me detailed information about aggregation and fragmentation.

