✍ Kashif M

Sunday, 26 January 2014

SQL query override in source qualifier – Informatica

You can alter or override the default query in the mapping by entering a SQL override in the Properties settings of the Source Qualifier transformation. You can enter any SQL statement supported by the source database.
Overriding the SQL Query

You can override the SQL query for a relational source.

To override the default query for a relational source:

1. In the Workflow Manager, open the session properties.

2. Click the Mapping tab and open the Transformations view.

3. Click the Sources node and open the Properties settings.

4. Click the Open button in the SQL Query field to open the SQL Editor.

5. Enter the SQL override.

6. Click OK to return to the session properties.
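
For example, a sketch of an override that joins two source tables and adds a filter (table and column names are illustrative; the columns you select must match the Source Qualifier ports in number and order):

SELECT CUSTOMERS.CUSTOMER_ID, CUSTOMERS.CUSTOMER_NAME, ORDERS.ORDER_AMOUNT
FROM CUSTOMERS, ORDERS
WHERE CUSTOMERS.CUSTOMER_ID = ORDERS.CUSTOMER_ID
AND ORDERS.ORDER_DATE >= TO_DATE('01/01/2014', 'MM/DD/YYYY')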

Target update override – Informatica



By default, the Integration Service updates target tables based on key values. However, you can override the default UPDATE statement for each target in a mapping. You might want to update the target based on non-key columns.

When the Integration Service executes SQL against a source, target, or lookup database, it searches the reserved words file stored in the Integration Service installation directory. It encloses matching reserved words in quotes. If you use target update override, you must manually put all reserved words in quotes.

If the mapping does not contain an Update Strategy transformation or a Custom transformation with the update strategy property enabled, configure the session to mark all source rows as update. The Target Update Override only affects source rows marked as update; the Integration Service processes rows marked as insert, delete, or reject normally. If the mapping does contain an Update Strategy or Custom transformation, configure the session to treat source rows as data driven; the Target Update Override then affects only the rows that the Update Strategy or Custom transformation marks as update.

For example, a mapping passes the total sales for each salesperson to the T_SALES table.

The Designer generates the following default UPDATE statement for the target T_SALES:

UPDATE T_SALES SET EMP_NAME = :TU.EMP_NAME, DATE_SHIPPED = :TU.DATE_SHIPPED, TOTAL_SALES = :TU.TOTAL_SALES WHERE EMP_ID = :TU.EMP_ID


Because the target ports must match the target column names, the update statement includes the keyword :TU to specify the ports in the target transformation. If you modify the UPDATE portion of the statement, be sure to use :TU to specify ports.
Overriding the WHERE Clause

You can override the WHERE clause to include non-key columns. For example, you might want to update records for employees named Mike Smith only. To do this, you edit the WHERE clause as follows:

UPDATE T_SALES SET DATE_SHIPPED = :TU.DATE_SHIPPED, TOTAL_SALES = :TU.TOTAL_SALES WHERE :TU.EMP_NAME = EMP_NAME AND EMP_NAME = 'MIKE SMITH'
Rules and Guidelines for Configuring the Target Update Override

Use the following rules and guidelines when you enter target update queries:


If you use target update override, you must manually put all database reserved words in quotes.



You cannot override the default UPDATE statement if the target column name contains any of the following characters: ' , ( ) < > = + - * / \t \n \0 <space>


You can use parameters and variables in the target update query. Use any parameter or variable type that you can define in the parameter file. You can enter a parameter or variable within the UPDATE statement, or you can use a parameter or variable as the update query. For example, you can enter a session parameter, $ParamMyOverride, as the update query, and set $ParamMyOverride to the UPDATE statement in a parameter file (an example entry appears after these guidelines).


When you save a mapping, the Designer verifies that you have referenced valid port names. It does not validate the SQL.


If you update an individual row in the target table more than once, the database only has data from the last update. If the mapping does not define an order for the result data, different runs of the mapping on identical input data may result in different data in the target table.


A WHERE clause that does not contain any column references updates all rows in the target table, or no rows in the target table, depending on the WHERE clause and the data from the mapping. For example, the following query sets the EMP_NAME to “MIKE SMITH” for all rows in the target table if any row of the transformation has EMP_ID > 100: UPDATE T_SALES set EMP_NAME = 'MIKE SMITH' WHERE :TU.EMP_ID > 100


If the WHERE clause contains no port references, the mapping updates the same set of rows for each row of the mapping. For example, the following query updates all employees with EMP_ID > 100 to have the EMP_NAME from the last row in the mapping: UPDATE T_SALES set EMP_NAME = :TU.EMP_NAME WHERE EMP_ID > 100


If the mapping includes an Update Strategy or Custom transformation, the Target Update statement only affects records marked for update.
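
For example, a parameter file entry for the $ParamMyOverride session parameter mentioned above might look like the following sketch (the folder, workflow, and session names are placeholders):

[MyFolder.WF:wf_load_sales.ST:s_m_load_t_sales]
$ParamMyOverride=UPDATE T_SALES SET TOTAL_SALES = :TU.TOTAL_SALES WHERE EMP_ID = :TU.EMP_ID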



To enter the target update override, complete the following steps:

1. Double-click the title bar of a target instance.

2. Click Properties. Click the Open button in the Update Override field.

3. The SQL Editor displays. Select Generate SQL.

4. The default UPDATE statement appears. Modify the update statement.

5. You can override the WHERE clause to include non-key columns.

6. Enclose all reserved words in quotes.

7. Click OK. When you save the mapping, the Designer verifies that you have referenced valid port names; it does not validate the SQL itself.


If you use the Target Update option, configure the session to mark all source records as update.

Tuning lookup transformation – Informatica




If the lookup table is on the same database as the source table in your mapping and caching is not feasible, join the tables in the source database rather than using a Lookup transformation.

If you use a Lookup transformation, perform the following tasks to increase performance:

· Use the optimal database driver.

· Cache lookup tables.

· Optimize the lookup condition.

· Filter lookup rows.

· Index the lookup table.

· Optimize multiple lookups.


Using Optimal Database Drivers

The Integration Service can connect to a lookup table using a native database driver or an ODBC driver. Native database drivers provide better session performance than ODBC drivers.
Caching Lookup Tables

If a mapping contains Lookup transformations, you might want to enable lookup caching. When you enable caching, the Integration Service caches the lookup table and queries the lookup cache during the session. When this option is not enabled, the Integration Service queries the lookup table on a row-by-row basis.

The result of the Lookup query and processing is the same, whether or not you cache the lookup table. However, using a lookup cache can increase session performance for smaller lookup tables. In general, you want to cache lookup tables that need less than 300 MB.

Complete the following tasks to further enhance performance for Lookup transformations:


· Use the appropriate cache type.

· Enable concurrent caches.

· Optimize Lookup condition matching.

· Reduce the number of cached rows.

· Override the ORDER BY statement.

· Use a machine with more memory.


Optimizing the Lookup Condition

If you include more than one lookup condition, place the conditions in the following order to optimize lookup performance: equal to (=) first, then less than (<), less than or equal to (<=), greater than (>), greater than or equal to (>=), and not equal to (!=) last.

Filtering Lookup Rows

Create a filter condition to reduce the number of lookup rows retrieved from the source when the lookup cache is built.
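
For example, a lookup source filter such as the following sketch (the column name is illustrative) limits the cache to active rows only; enter it in the Lookup Source Filter property of the Lookup transformation:

ITEM_STATUS = 'ACTIVE'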

Indexing the Lookup Table

The Integration Service needs to query, sort, and compare values in the lookup condition columns. The index needs to include every column used in a lookup condition.

You can improve performance for the following types of lookups:


Cached lookups. To improve performance, index the columns in the lookup ORDER BY statement. The session log contains the ORDER BY statement.


Uncached lookups. To improve performance, index the columns in the lookup condition. The Integration Service issues a SELECT statement for each row that passes into the Lookup transformation.
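
For example, a minimal index for an uncached lookup whose condition compares ITEM_ID might look like this (table, index, and column names are illustrative):

CREATE INDEX IDX_ITEMS_LKP ON ITEMS (ITEM_ID);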

Optimizing Multiple Lookups

If a mapping contains multiple lookups, even with caching enabled and enough heap memory, the lookups can slow performance. Tune the Lookup transformations that query the largest amounts of data to improve overall performance.

To determine which Lookup transformations process the most data, examine the Lookup_rowsinlookupcache counters for each Lookup transformation. The Lookup transformations that have a large number in this counter might benefit from tuning their lookup expressions. If those expressions can be optimized, session performance improves.

Pmcmd command usage in Informatica






The pmcmd command-line program is used to run a workflow from the command prompt; it can also stop or abort workflows and return their status.
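
A typical invocation to start a workflow looks like the following (the service, domain, user, folder, and workflow names are placeholders):

pmcmd startworkflow -sv IS_DEV -d Domain_Dev -u Administrator -p MyPassword -f MyFolder wf_load_sales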


How to load rows into a fact table in a data warehouse

A fact table is the central table in a dimensional model; it contains the foreign keys to the dimension tables and the measures. Because the fact table is surrounded by multiple dimension tables, loading it means fetching the primary (surrogate) keys of all the dimension tables and loading them into the fact table along with the measures.
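
For illustration, the SQL equivalent of such a load might resemble the following sketch (all table and column names are hypothetical); in a mapping, the dimension keys are usually fetched with Lookup transformations:

INSERT INTO F_SALES (DATE_KEY, PRODUCT_KEY, CUSTOMER_KEY, SALES_AMOUNT)
SELECT d.DATE_KEY, p.PRODUCT_KEY, c.CUSTOMER_KEY, s.SALES_AMOUNT
FROM STG_SALES s
JOIN D_DATE d ON d.CALENDAR_DATE = s.SALE_DATE
JOIN D_PRODUCT p ON p.PRODUCT_CODE = s.PRODUCT_CODE
JOIN D_CUSTOMER c ON c.CUSTOMER_CODE = s.CUSTOMER_CODE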

 
Difference between stop and abort in Informatica

Stop – the Integration Service stops reading data from the source but continues processing and writing any data it has already read.

Abort – like stop, but with a 60-second timeout; if the task does not finish processing and committing data within the timeout, the Integration Service kills the DTM process and forces the task to stop.
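
Both operations can also be issued from the command line with pmcmd (the service, domain, folder, and workflow names are placeholders):

pmcmd stopworkflow -sv IS_DEV -d Domain_Dev -u Administrator -p MyPassword -f MyFolder wf_load_sales
pmcmd abortworkflow -sv IS_DEV -d Domain_Dev -u Administrator -p MyPassword -f MyFolder wf_load_sales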

Pushdown optimization and types in Informatica



The types of pushdown optimization:

Source-side pushdown optimization. The Integration Service pushes as much transformation logic as possible to the source database.

Target-side pushdown optimization. The Integration Service pushes as much transformation logic as possible to the target database.

Full pushdown optimization. The Integration Service attempts to push all transformation logic to the target database. If the Integration Service cannot push all transformation logic to the database, it performs both source-side and target-side pushdown optimization.

Running Source-Side Pushdown Optimization Sessions

When you run a session configured for source-side pushdown optimization, the Integration Service analyzes the mapping from the source to the target or until it reaches a downstream transformation it cannot push to the source database.

The Integration Service generates and executes a SELECT statement based on the transformation logic for each transformation it can push to the database. Then, it reads the results of this SQL query and processes the remaining transformations.
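
For instance, if a Filter transformation directly after the Source Qualifier filters on ORDER_STATUS (a hypothetical example), the generated SQL might resemble:

SELECT ORDERS.ORDER_ID, ORDERS.ORDER_STATUS, ORDERS.ORDER_AMOUNT FROM ORDERS WHERE ORDERS.ORDER_STATUS = 'SHIPPED'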

Running Target-Side Pushdown Optimization Sessions

When you run a session configured for target-side pushdown optimization, the Integration Service analyzes the mapping from the target to the source or until it reaches an upstream transformation it cannot push to the target database. It generates an INSERT, DELETE, or UPDATE statement based on the transformation logic for each transformation it can push to the target database. The Integration Service processes the transformation logic up to the point that it can push the transformation logic to the database. Then, it executes the generated SQL on the target database.

Running Full Pushdown Optimization Sessions

To use full pushdown optimization, the source and target databases must be in the same relational database management system. When you run a session configured for full pushdown optimization, the Integration Service analyzes the mapping from the source to the target or until it reaches a downstream transformation it cannot push to the target database. It generates and executes SQL statements against the source or target based on the transformation logic it can push to the database.

When you run a session with large quantities of data and full pushdown optimization, the database server must run a long transaction. Consider the following database performance issues when you generate a long transaction:

A long transaction uses more database resources.

A long transaction locks the database for longer periods of time. This reduces database concurrency and increases the likelihood of deadlock.

A long transaction increases the likelihood of an unexpected event.

To minimize database performance issues for long transactions, consider using source-side or target-side pushdown optimization.

Integration Service Behavior with Full Optimization

When you configure a session for full optimization, the Integration Service analyzes the mapping from the source to the target or until it reaches a downstream transformation it cannot push to the target database. If the Integration Service cannot push all transformation logic to the target database, it tries to push all transformation logic to the source database. If it cannot push all transformation logic to the source or target, the Integration Service pushes as much transformation logic to the source database, processes intermediate transformations that it cannot push to any database, and then pushes the remaining transformation logic to the target database. The Integration Service generates and executes an INSERT SELECT, DELETE, or UPDATE statement for each database to which it pushes transformation logic.

For example, suppose a mapping contains the following transformations: Source Qualifier > Aggregator > Rank > Expression > Target. The Rank transformation cannot be pushed to the source or target database. If you configure the session for full pushdown optimization, the Integration Service pushes the Source Qualifier transformation and the Aggregator transformation to the source, processes the Rank transformation itself, and pushes the Expression transformation and target to the target database. The Integration Service does not fail the session if it can push only part of the transformation logic to the database.

Pushdown optimization $$PushdownConfig parameter – Informatica


Depending on the database workload, you might want to use source-side, target-side, or full pushdown optimization at different times. For example, use source-side or target-side pushdown optimization during the peak hours of the day, but use full pushdown optimization from midnight until 2 a.m. when database activity is low.

To use different pushdown optimization configurations at different times, use the $$PushdownConfig mapping parameter. The parameter lets you run a session using the different types of pushdown optimization. The settings in the $$PushdownConfig parameter override the pushdown optimization settings in the session properties.

Complete the following steps to configure the mapping parameter:

1.    Create $$PushdownConfig in the Mapping Designer.

2.    When you add the $$PushdownConfig mapping parameter in the Mapping Designer, use the following values:

Name: $$PushdownConfig
Type: Parameter
Datatype: String
Precision or Scale: 20
Aggregation: n/a
Initial Value: None
Description: Optional

3.    When you configure the session, select $$PushdownConfig for the Pushdown Optimization attribute.

4.    Define the parameter in the parameter file.




Enter one of the following values for $$PushdownConfig in the parameter file:

None – The Integration Service processes all transformation logic for the session.

Source [Seq View Conn] – The Integration Service pushes as much of the transformation logic to the source database as possible.

Target [Seq View Conn] – The Integration Service pushes as much of the transformation logic to the target database as possible.

Full [Seq View Conn] – The Integration Service pushes as much of the transformation logic to the source and target databases as possible, and processes any transformation logic that it cannot push to a database.


Optionally, specify one or more of the following options:

Seq. Allows the Integration Service to create a sequence object in the database.

View. Allows the Integration Service to create a view object in the database.

Conn. Indicates that the database user of the active database has read permission on the idle database, which is required to push transformation logic to the active database.

For example, enter ‘Full View Conn’ to use full pushdown optimization, enable the creation of view objects in the active database, and indicate that the active database has read permission on the idle database.
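
A minimal parameter file sketch for this configuration (the folder, workflow, and session names are placeholders):

[MyFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$PushdownConfig=Full View Conn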

How to create user defined function (udf) in Informatica



User-defined functions are extensions to the PowerCenter transformation language. They let you build complex expressions from the built-in functions and operators and reuse them in PowerCenter mappings. There are two types of user-defined functions:

Public: Callable from any user-defined function, transformation expression, link condition expression, or task expression.

Private: Callable only from another user-defined function. Create a private function when you want the function to be part of a more complex function; the simple function may not be usable independently of the complex function.

Creating User Defined Functions

You can create a user-defined function from the Informatica PowerCenter Designer tool; invoke the User-Defined Function dialog box from the main menu: Tools > User-Defined Functions > New. Suppose you want a user-defined function that converts a phone number into (XXX) XXX-XXXX format.

Define the function to convert PHONE_NUMBER into a string formatted as (XXX) XXX-XXXX; the expression editor shows the logic used within the user-defined function. User-defined functions can then be used in any expression just like the built-in functions, and you can pick them from the functions explorer.
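
For example, the body of such a function might be the following expression sketch, assuming a 10-digit input argument named PHONE_NUMBER (the argument name and input format are illustrative):

'(' || SUBSTR(PHONE_NUMBER, 1, 3) || ') ' || SUBSTR(PHONE_NUMBER, 4, 3) || '-' || SUBSTR(PHONE_NUMBER, 7, 4)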


Load alternative records / rows into multiple targets – Informatica





To pass alternate rows into two targets, the mapping flow is:



SQ -> Sequence Generator -> Router -> TGT1, TGT2



The Router group filter condition MOD(SNO, 2) = 0 passes the even-numbered rows to one target (TGT1), and the default group of the Router passes the odd-numbered rows to TGT2 (SNO is the row number generated by the Sequence Generator), as sketched below.
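
A sketch of the Router groups, assuming the Sequence Generator NEXTVAL port is connected to a port named SNO:

EVEN group filter condition: MOD(SNO, 2) = 0   (rows routed to TGT1)
DEFAULT group: receives the remaining odd-numbered rows (routed to TGT2)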

Load last n records of file into target table – Informatica




SQ -> SG -> AGG -> Router -> TGT

AGG -> computes MAX(SNO), the highest row number generated by the Sequence Generator (SG).

Router -> group filter condition SNO > MAX(SNO) - n; this group passes the last n records to the target.

Deployment groups in Informatica




A deployment group is a global object that consists of objects from one or more folders. It is used to copy objects from one or more folders to another folder or repository. You can create, edit, or delete deployment groups, and you can copy a deployment group and the objects it contains to a target repository.

Deployment Group Tasks:

You can do the following tasks when working with deployment groups:



· Create deployment group: You can create a global object for deploying objects from one or more folders.

· Edit deployment group: You can modify a deployment group, for example change a static deployment group to a dynamic deployment group and vice versa.

· Configure privileges and permissions: Configure permissions on a deployment group and the privilege to copy a deployment group.

· View the objects in a static or dynamic deployment group: You can preview the objects that the Repository Service will deploy.

· Add or remove objects in a static deployment group: You can specify what objects belong to a static deployment group. You can add or remove the objects from the deployment group.

· Associate a query with a dynamic deployment group: Assign a query to a dynamic deployment group to dynamically update the objects that the group contains.

· View the history of a deployment group: View the history of a deployment group, including the source and target repositories, deployment date, and user who ran the deployment.

· Post-deployment validation: Validate the objects in the target repository after you copy a deployment group to verify that the objects and dependent objects are valid.

· Roll back a deployment group: You can roll back a deployment group to purge deployed versions of objects from the target repository.

Convert multiple rows to single row (multiple columns) in Informatica






Take, for example, the following source and target:

Source

Sid | Code | Value
1   | A    | ABC
1   | B    | BCD
1   | C    | CCD
2   | A    | SDC
2   | C    | ASD
3   | D    | DFG

Target

Sid | Code A | Code B | Code C | Code D
1   | ABC    | BCD    | CCD    | <null>
2   | SDC    | <null> | ASD    | <null>
3   | <null> | <null> | <null> | DFG





The mapping flow is:

SQ -> AGG -> TGT

In the Aggregator, create four output ports:

MAX(DECODE(CODE, 'A', VALUE)),
MAX(DECODE(CODE, 'B', VALUE)),
MAX(DECODE(CODE, 'C', VALUE)),
MAX(DECODE(CODE, 'D', VALUE))

Group by SID and connect the four ports to the respective target columns.
