MSBI Training In Hyderabad

ENQUIRE US NOW

MSBI Training In Hyderabad Ameerpet

We provide placement-focused, real-time MSBI training at multiple locations in Hyderabad. Our Microsoft Business Intelligence course runs from basic to advanced level and is designed to help students secure placements in good MNCs. After completing our certified courses you can work on several MSBI projects with expert guidance. Our professionals design the syllabus and content of the MSBI course around the requirements of students, with the primary aim of helping them achieve their career goals.

Classes with multiple training sources:

Our classes are available at multiple training locations across Hyderabad. Our online courses are well equipped with video tutorials and other study materials focused on MSBI training. As they are meant for both novices and experienced professionals, the packages vary accordingly. Through these courses people get trained effectively without disturbing their schedules: since the MSBI training in Hyderabad is online, you can check the timings first and plan the course around them.

Course content for you:

Depending on the needs and demands of students, there are well-prepared course materials to guide you, with several packages to suit flexible requirements. Listed below are some of the major modules and their sub-divisions covered in the online course.

Module 1 or introduction point:

  • Meaning of Microsoft Business Intelligence
  • Core BI concepts and the UDM
  • Example cube built with the help of a pivot table
  • Why MSBI is considered more comprehensive than Analysis Services alone
  • Demonstration of SQL Server Reporting Services with a cube

Module 2 or OLAP model:

  • Model source schemas with snowflakes and stars
  • Understanding dimensional modeling with three types of dimensions
  • Know more about cube modeling and fact measurement
  • Various other forms of modeling, like data mining and more

Module 3 or SSAS as in BIDS:

  • Proficient use of the development environment; creating data sources and data source views
  • Create cubes with the UDM and the Cube Build wizard
  • Refine dimensions and measures in BIDS

Module 4 or intermediate SSAS:

  • KPIs and Perspectives
  • Translations with currency localization and cube metadata
  • Actions with drill through, regular and reporting services

Module 5 or advanced form of SSAS:

  • Use of various fact tables and model intermediate fact tables
  • Model M:M dimensions with role playing and fact dimensions
  • Write back dimensions and with modeling changing dimensions
  • Proficient use of business intelligence wizards, as defined under semi-additive procedure, write back and time intelligence

Module 6 or cube storage along with aggregation:

  • Storage related topics like MOLAP and basic aggregations
  • Advanced design of storage
  • Partitions under analysis services and relational partitions
  • Customized aggregation design and processing design
  • Rapid changing dimensions or ROLAP dimensions
  • Real-time data with proactive caching
  • Cube processing values

Known as one of the best MSBI training institutes in Hyderabad, we offer several other leading modules, all meant for the betterment of our aspiring students. Just check out the available facilities first and pick the MSBI training options suited to your needs and demands.

Beginning MDX is another important point of focus in the MSBI training course. From basic syntax to proficient use of the MDX query editor, the options are extensive. The course relies mainly on SQL Server Management Studio along with the newer MDX functions, and you will also learn the everyday tasks and the most-used functionality.

Know more about intermediate MDX:

Intermediate MDX is another important part of our MSBI online training in Hyderabad. This module deals with adding calculated members and, apart from adding scripts, it also covers adding named sets. .NET assemblies are another important topic you will come across. Last but not least, students also learn proper administration of SSAS, another major part of this course.

Notice of significant values:

Health monitoring is another important part of best practices, waiting for you to understand. The first and foremost option is related to XMLA scripting, a major part of SQL Mgmt studio. You will also get to know more about other fields of documentation methods. Security roles and security permissions, along with disaster recovery are some of the other rules, which are likely to be taught to the reliable and aspiring students. Clustering services with high availability are the last step, to be incorporated within this learning module or package.

Know more about data mining:

If you are a novice availing MSBI online training for the first time, you need to get acquainted with data mining services. To make the course easy to grasp, there are reliable examples related to the nine algorithms, such as MS Decision Trees, MS Clustering and Naïve Bayes, as some of the major options. Moreover, Sequence Clustering, MS Association Rules, MS Time Series and MS Neural Network are some of the value-added topics for you. You can even opt for data mining dimensions and data mining clients as other noteworthy points.

If you are looking for training on processing mining models at MSBI institutes in Ameerpet, you can get promising results immediately. We are ready to offer you the best guidance and help you become an expert in this field. Get acquainted with our courses right away and let us gift you a perfect career.

MSBI Classroom Notes

MICROSOFT BUSINESS INTELLIGENCE [MS-BI]

Opportunities:-

1. MSBI Suite Developer

2. SSIS Suite Developer

3. SSAS Suite Developer

4. SSRS Suite Developer

5. .NET + [SSIS (or) SSRS]

6. DWH [ETL (or) Reporting Tools] + MSBI

7. ETL Testing

8. SQL Server developer + MSBI

9. SQL server DBA + MSBI

10. MSBI FRESHERS.

MSBI Presence in the market
DATA WAREHOUSING:-

It is an RDBMS which holds a huge volume of data in support of business decisions.

[Diagram]

 

ETL:-

E = Extract (getting the data)

T = Transform (performing Intermediate operations)

Eg: - Currency conversions, etc.

L =Load (load to destinations).

Tools: - Informatica, DataStage, Ab Initio, OWB, BODI, etc.
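For illustration only, a minimal T-SQL sketch of the three steps above, assuming hypothetical staging and warehouse tables (Staging_Orders, DWH_Orders) and an illustrative conversion rate:

-- Extract: pull the raw rows from a hypothetical staging (source) table
SELECT OrderID, AmountUSD
INTO #Extracted
FROM Staging_Orders;

-- Transform + Load: apply an intermediate operation (an illustrative currency
-- conversion) while loading into the warehouse destination table
INSERT INTO DWH_Orders (OrderID, AmountINR)
SELECT OrderID, AmountUSD * 83.0   -- assumed USD-to-INR rate, for illustration only
FROM #Extracted;

Dedicated ETL tools such as SSIS do the same three steps with graphical components instead of hand-written SQL.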

Analysis Tools: - Create multidimensional objects and provide multidimensional analysis (i.e., analytical / multidimensional operations).

Tools: - COGNOS, BO, HYPERION, MICROSTRATEGY, OBIEE, etc.

 

Reporting Tools: - To represent the data in an understandable format (i.e., Word, PDF, Excel sheets, etc.) to the end users / analysts, we require these tools.

Tools: - COGNOS, BO, HYPERION, CRYSTAL REPORTS, PANORAMA, MICROSTRATEGY, OBIEE, etc.

 

NOTE:-

The information we receive in DWH is in a 2-dimensional format (i.e., rows and columns).

 

MSBI Competitive Advantages:-

 

1. It is a single site for the end-to-end business solution.

2. It has extended the capability for extraction, transformation, and loading.

3. Low TCO [Total Cost of Ownership] - easy to install, easy to use, easy to maintain, cheaper price.

4. Highly Scalable - it supports multiple instances connecting to the server without sacrificing performance.

 

 

 

*MSBI is said to be a linearly scalable application due to having CMS (Central Management Server) and PBM (Policy Based Management) servers (2008 R2).

 

Full Compatibility between the components of MSBI:-

 

*All components run in a single runtime called 'CLR' [Common Language Runtime], i.e., they share similar coding and naming notation.

*Full support for .NET, XML and web services (universal languages).

*MSBI provides very good support to semi-warehouse applications (if we are reusing OLTP functionality in a warehouse).

 

DATA Evaluation Stages

Eg: - The number of savings accounts opened in the year 2009-2010, month wise, for every location of

INDIA & SINGAPORE

Differences between 2000 & 2005

2000                                         2005

DTS (Data Transformation Services)           SSIS (SQL Server Integration Services)
  File ---- File
  Database ---- Database
  Database ---- File
MSAS                                         SSAS [Analysis Services]
No Reporting                                 SSRS [Reporting Services]
No Notifications                             SSNS [Notification Services]

2. XML data type support.

3. MDX [Multi-Dimensional Expressions].

 

DBA features                                    Developer features

DISK mirroring                                 Hosted CLR

ONLINE restoring                             Ado.Net Support

ONLINE indexing                              Web Services Support

Dedicated DBA connection               Advanced T-SQL etc

 

Disk Mirroring:-

Specially designed for Administrators, Disk level one & only recovery mechanism is RAID.

 

Online Restoring: - Database operations and restoring (from backup) can be done simultaneously on the same instance.

Online Indexing: - Index creation & DB operations can be performed simultaneously on the same database.

 

Business Intelligence features:-

1. SQL Server Integration services (SSIS)

2. SQL Server Analysis Services (SSAS)

3. SQL Server Reporting Service (SSRS)

4. SQL Server Notification Services (SSNS)

5. Proactive Caching

6. Report Builder.

7. Data mining.

8. MS-office full support.

 

 

Management Studio: -Till 2000

[Normal, IS, AS, RS, Mobile]

 

Hosted CLR:-

Just as the .NET languages run under a single runtime (CLR), MSBI also runs under the CLR.

Advantage:-

We can create tables (or) other objects in C#.NET (or) VB.NET and later import them into the SQL Server DB.

*Debugging is simpler than debugging inside the database.

 

Advanced T-SQL:-

There are many keywords added in 2005 to satisfy the demands of BI people.

SSIS: -It is a High-end platform which performs ETL operations and administrative tasks.

SSAS: -It is a "MOLAP" tool to create, manipulate and provide multidimensional objects and analysis.

SSRS: -It presents the data in an understandable format, i.e., Word document, PDF, etc.

SSNS: -This is used to send (or) receive notifications based on the event of different applications, scripts

(or) components.

 

Proactive caching: -Used in analysis services to keep the cube and databases in sync with each other (up to date).

 

Report Builder: -Designed specially to create Adhoc reports.

 

Data Mining: -It is a knowledge analysis and discovery mechanism.

In analysis we get answers to questions like "what happened?" and "why did it happen?", but in data mining we get an answer to the question "what will happen in the future?"

 

Changes were done (or) Made to the existing B.I components:-

SSIS Level:-

Differences between 2005 &2008:-

1. Date and Time data types added (it supports 26 digits).

2. SPATIAL data types added.

3. GEO SPATIAL applications added.

4. Resource Governor implemented.

(It does governance of various resources in 2008).

5. CMS added = all the instances are managed under a single management server interface.

6. PBM=There are common policies implemented between different instances in CMS.

7. SSNS removed.

Hardware & Software requirements for MSBI installation Steps:-

H/W requirements:-

1. 1 GHz Pentium III-compatible (or) faster processor (2 GHz or faster recommended).

2. 512 MB of RAM (or) more (2 GB or higher recommended).

3. 2.1 GB free hard-disk space for SQL Server installation files and samples.

S/W requirements:-

1. A compatible operating system.

Many versions of Windows server and desktop OS are supported, including Windows XP SP2 or later, Windows Server 2003 (with SP2), Windows Vista and Windows Server 2008.

*Click "Setup"(select first option from the window opened, i.e., New SQL Server stand-alone installation (or) added features

Click on the     required components

SSIS, AS, RS, BI Development Studio, management tools, etc..

*Instance configuration.

*Server configuration.

Specify the same account, the password for all the below services (or) different user id, the password for each service.

Services:-

1. SQL Server Agent

2. SQL Server Database Engine

3. SQL Server Analysis Services

4. SQL Server Reporting Services

5. SQL Server Integration Services 10.0

 

Note:-

Before Installation process, we should have a valid user account and password.

*Database configuration

Authentication mode

Select mixed mode (SQL Server authentication & window authentication).

Built –in SQL Server system administrator account

Username:-

Password:-

*Reporting services configuration (Select "Install the native mode default configuration" Option)

*After clicking the option it asks for a user id & password; specify them.

Tools and Utilities required to work with MS-BI:-

a. BIDS (SQL Server Business Intelligence Development Studio):-

This studio is required to work with IS, AS & RS applications.

Navigation: - Start ---- Programs ---- SQL Server 2008 ---- SQL Server Business Intelligence Development Studio

b. SSMS (SQL Server Management Studio):-

This is useful to work with IS, AS, RS, SQL Server Mobile and regular databases (it is helpful for managing the databases and firing queries on them).

Navigation:

Start—Programs—SQL server2008—SSMS

C. Command Line Utilities:-

 

 

Working with Management Studio:-

a. Open SSMS (SQL server Management Studio)

b. Select server type in the following

1. Database Engine—To  connect and work with a normal database.

2. Integration service –To connect and work with Integration service database

3. Analysis service—To connect and work with analysis services databases.

4. Reporting services – To connect and work with reporting services database.

5. SQL Server Compact Edition - To connect and work with mobile applications.

c. Specify server name - either an IP address (or) the actual server name.

d. Select Authentication:-

1. Windows Authentication (consider windows credentials to logon)

2. SQL Server Authentication (server database credentials)

Note: -Select the option (ii) for the real-time work process.

e. Click Connect ---- Next ----

Creating Databases (right-click on Databases)

New Database window

Database Name: - DBNEW

---- click OK

Creating tables --- DBNew --- Tables --- right-click ---

Select ‘New Table'

Column Name                                   DataType

PARTY ID                              INT

PARTYNAME                                     Varchar(50)

*Next save the table with a specified Name i.e., PARTY

*Click OK
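The same table can also be created with plain T-SQL instead of the table designer; a minimal sketch (the dbo schema is assumed):

CREATE TABLE dbo.PARTY
(
    PartyID   INT,          -- party identifier
    PartyName VARCHAR(50)   -- party name
);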

Adding data to the table of the database:-

Right-click on the TABLE --- select the option "Edit Top 200 Rows".

PARTY ID                               PARTY NAME

1                                        X

2                                        VINAY

Firing Queries on database:-

*Go to the database (i.e., DBNew)

*Right-click --- select New Query

*SELECT * FROM PARTY;

*Click Execute (or) press "F5"

*INSERT INTO party VALUES (3, 'Madhu', 'Hyd', 5000);

*UPDATE party SET PartyID = 20 WHERE PartyID = 222;

Connecting to the other database:-

Two ways:-

1. By specifying <database name>.<schema name>.<object name>

(of the other database)

Eg: - SELECT * FROM VINAY_DB.DBO.TEST

SELECT * FROM SIVA_DB.DBO.TEST

(ii) Taking the other database as the current database

Syntax: - USE <database>;

USE SIVA_DB;

SELECT * FROM <object name>;

SELECT * FROM PARTY_TEST;

Note:-

The second way is recommended if we are firing many queries on the same database (Eg: - SIVA_DB)

 

 

 

SSIS [SQL Server Integration Services]

ETL Operations                                    Administrative Tasks

E - Extract -- getting the data                   *Taking backup of databases
T - Transform -- performing                       *Shrinking databases
    intermediate operations                       *History cleanup
L - Load -- load to the destination               *Transferring databases / logins / error messages etc.

DB ---- DB

DB ---- File

File ---- File

Databases:- Oracle, SQL Server, Teradata, etc.

Files:- XML, Excel, Flat File, Raw File, etc.

SSIS logical Architecture:-

 

There are 4 important components in SSIS Architecture

a. Object model

b. SSIS runtime

c. Integration services service

d. Data flow Task.

SSIS Designer: - It's a native tool to create IS packages and their components.

Object Model: - It is an application programming interface which connects to and understands custom tools & components.

SSIS Runtime: - This is the 'CLR' which saves the layout of the packages (.dtsx), runs the packages and manages the package components.

Integration Services Service: - This component helps us to store the packages in the SQL Server database (MSDB), manage the packages and run the packages.

Data Flow Task: - To move the data between the sources and destinations and perform different operations, the data flow task is required. It uses various "inline buffers" while processing the data, and a "data pipeline engine" to move the data from source to destination & to manage the buffers.

*Package and its components:-

Package is an important component in SSIS Architecture.

*It can be constructed through custom tools (or) native tools.

*It performs all operations such as "ETL" & administrative tasks as part of its work. It uses various other components for processing.

Eg: - Logging, event handling, package configuration etc.

*Control flow task is mandatory for every package.

*To move the data from source to destination, a data flow task is required.

Engines in SSIS: -------- a. SQL engine (generates the plan to execute the package)

b. Data pipeline engine (only inside the data flow task).

Differences between DTS & SSIS

DTS:

*Introduced in SQL Server 7.0; available in 2000 onwards.
*Designed for EST [Extract, Transform from Sources].
*It consists of a single pane (pane means screen or frame) for all operations; it has data transformations, workflow etc.
*Data transformation available.
*No DSV [Data Source View], no connection manager, no event handling, no looping through folders/files.
*Message boxes displayed in ActiveX script.
*Less transformations.
*Partial BI support (less).
*No deployment wizard.
*Saved in: a. Enterprise Manager (SQL Server)  b. File system (structured storage file).

SSIS:

*Available from SQL Server 2005 onwards.
*Designed for ETL [Extract, Transform, Load].
*It consists of multiple panes for multiple operations; it has Control Flow, Data Flow, Package Explorer and Event Handling.
*Data Flow task introduced, with transformations embedded.
*DSV [Data Source View] available (introduced).
*Message boxes displayed in Script Task.
*More transformations.
*Full support to BI.
*Deployment wizards are there.
*Saved in the local file system and deployed to SQL Server.

2008 to 2008R2

1.2008R2 is the ‘Second release' of 2008.

2. CODE NAME"KILIMANJARO"

3. Released in the middle of 2009.

Supported Features:-

1. Max 25 instances in CMS.

2. Max 256 logical processors in CMS.

3. Multi-server Administrator

4. MDS (Master Data Services)

5."Data-Tier" applications

6. PowerPivot for visualization

 

 

 

7. Full support to:

*Data compression, with UCS-2 Unicode support

*Available editions

2008 R2 to 2011:-

*Its code name is "DENALI".

*Multi-subnet failover clustering introduced.

Programming Enhancements:-

*CREATE SEQUENCE introduced

Syntax:- CREATE SEQUENCE <sequence name> START WITH

<value> INCREMENT BY <value>

Eg:- CREATE SEQUENCE x START WITH 1 INCREMENT BY 1

INSERT INTO Test VALUES (NEXT VALUE FOR x, 'Vinay')

*Paging implemented in 2011:-

*It displays the required rows page wise.
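A minimal T-SQL sketch of this paging feature using OFFSET / FETCH; the PARTY table and the page size are assumed for illustration:

-- Page 2 with a page size of 10: skip the first 10 rows, return the next 10
SELECT PartyID, PartyName
FROM dbo.Party
ORDER BY PartyID
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;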

*Full-text search of Index introduced

*The usage of Excel PowerPivot is enhanced so that reporting models are created easily.

*Analysis Services "BISM" (Business Intelligence Semantic Model) introduced. It is a 3-layer model.

*Web-based visualization introduced (Project Crescent). It is a code name for reporting applications with better visualizations.

SSIS Practical Architecture:-

Solution (Collection of projects)

Project (Collection of packages)

Package (A discrete unit of work for doing ETL operations & administrative tasks)

Navigation:-

Start---> programs---> SQL Server 2008----> Click BIDS----> File----> New ----> Project---->

Select integration services project in templates window-----> Enter Name----Click OK.

[Project Name: Test - Solution/Project Location: C:\Test]

 


Note: - By default there is no solution present, so we can create a solution at project creation time.

----> Click View menu ----> Solution Explorer (It describes the projects, packages, data source views, data source information) [1. It is used to connect to the DB

2. It can be reused "ACROSS packages"]

Data Source view:-

It is the logical object for the physical collection of tables (or) View in data sources.

Connection Manager:-

For every connection we can take a name, the new name can be reusable within the program (or) package.

*In case of flat file: -Folder and the filename taken as connection string for connection manager

*In case of a relational source: - Server name and database name are taken as part of the connection string in the connection manager

Various ways of package Execution:-

a. By pressing “F5”.

b. Solution Explorer----->Packages---->rtclick---->Execute package.

c. Debug Menu ---->Start debugging.

d. SSMS---->Integration services--->MSDB---->Package--->rtClick----->runpackage.

e. By using the DTUTIL and DTEXEC utilities, etc.

Colors and their meanings:-

White----->Ready to execute

Yellow----->Running

Red------>Fail

Green---->Success

Grey----->disable

 

 

Ensuring Package Success, Failure, Error, Bottlenecks:-

We observe this information in “progress Tab” (or)”Log providers”

a. Progress tab information: - It describes how the package was validated and executed step by step from start to end. Generally it displays

i. The no. of rows operated on

ii. Source and destination connections

iii. The amount of time taken between one statement and another, etc.

b. Log Providers: - DISCUSSED LATER IN THIS BOOK.

Variables:-

A variable is a value which is changeable within the package. There are two types of variables:

1. System-defined variables: - These variables hold system information.

These variables are stored under the SYSTEM <namespace>.

Eg: - SYSTEM::PackageName

SYSTEM::ExecutionID

2. User-defined variables: - These variables are created by the user. They are stored under the USER namespace.

Syntax:- USER::<VariableName>

Eg: - USER::NameVar

USER::Temp_date, etc.

Navigation:-

SSIS Menu---->Variables----->In the variables window click the top most left corner option to create the new variable.

Name          Scope     DataType         Value

I            package     int32                   1

Variable Scope: - The extent to which we can use the variable is called the scope of the variable. There are two scopes.

a. Package Level: - The variable can be used anywhere within the package.

b. Task Level: - The variable can be used within that task only.

 

Working with Data Flow Task:-

To move the data from source to destination and to perform intermediate operations, this task is mandatory. Frequently used destinations are Flat File destination &OLEDB destination.

Real Time Modes of Flat File Sources:-

 

Eg: - Moving data from file to file   (comma Delimited)

Sol: - i. Take data Flow Task on control Flow.

ii. Go to data flow task ----> drag & drop Flat File Source & destination and do the below setting.

INPUT:-

Party_SRC ---> Notepad

File Name: Party_SRC

No. of Rows: 13

FILE NAME: PARTY_SRC.TXT   NO OF ROWS: 2   CREATED BY: VINAY

PartyID,  PartyName,  PartyLoc,  PartyIncome,  PartyCode
1,        Shiva,      HYD,       30000,        30
2,        Madhu,      MUM,       40000,        60

*Connection Manager Name is reusable in the packages.

 

 

 

IS Project 1- Micro-soft visual Studio

OLEDB: - Object Linking and Embedding Database.

Eg: - Moving data from table to table, from the source database DB_MSBI

to the destination database MSBI_DB.

 

Navigation: -Open BIDS--->Take data flow task in control flow

 

OLEDB--->Object Linking and Embedding Database----> Universal provide to any database (or) application (Excel…etc)

OLEDB Source----->Rt click----> edit---->

OLEDB Connection Manager--->New----->New

Provider: Native OLE DB \ SQL Server Native Client 10.0

Server name: Local Host

Select Authentication: Windows (or) SQL Server

Select Data base:DB_MSBI

Click Test connection ----->OK------>OK------>OK.

Name of the Table or view ---->Emp----> OK----->OK

OLEDB Destination----->RC------>edit

OLEDB Connection Manager ---->------> New

Provider: Native SQL Server Client 10.0

Server Name: Local Host

Select Authentication: Windows (or)SQL Server

Select Database: DB_VINAY

Click Test connection ------>OK----->OK----->OK

Name of the table or view ------>Click New------->Change Table

Name----->OK

Go to Mappings ----> Connect the required source columns to the required destination columns

Eg:-Moving columns from one worksheet Excel to another worksheet in another Excel.

Eg: -Moving data from flat file to raw file. A raw file contains binary information, so we are not able to read.

------> Raw File Destination------->Edit ----->Connection Manager----->rtclick

Access Mode-----> File Name

File Name------>File Path

Write always----> created always

Select columns -----> Columns -------> OK

Eg: -Loading Data from XML file to a table in SQL Server database.

XML_SRC.XML

<EMPS>

<Student>

<EID>001</EID>

<EName>Vinay</EName>

</Student>

<Student>

<EID>002</EID>

<EName>Siva</EName>

</Student>

</EMPS>

XML Source ----> rt click ----> Data access mode ----> XML location ----> Generate XSD ----> OK

----> Then connect to the OLEDB destination.

Note: - XML Schema definition must be specified (or) Generated to the corresponding  XML File.

Transformations: -These are the common operation performed between source and destination.

Eg: - Concatenation, Addition, Sorting, Merging etc.

There are different data flow transformations provided for different operations.

Sort Transformation: -It Sorts the data in the specified order (ascending (or) descending).

*It has the flexibility to sort on multiple columns

(by giving a sort order)

*It helps us to display unique rows while sorting by eliminating duplicates

Eg: - Display the data by Location in ascending order, within the locations names in descending order.

 Derived column Transformation:-

It performs operations row by row. It does different calculations, aggregations, concatenations, transformations, conversions, etc., for the columns in the rows (a comparative T-SQL sketch follows the examples below).

Eg: -a. Display Name and Location by concatenation

b. Display income, if it is Null 99999

c. Display income increment by 12%

d. Display a new field with current date as business date

e. Display default company code as 21000
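For comparison only, the calculations in examples a-e can be sketched in T-SQL as below; the SSIS Derived Column transformation uses its own expression language, and the column names here are assumed from the PARTY source used earlier:

SELECT
    PartyName + ' - ' + PartyLoc   AS NameAndLoc,      -- (a) concatenation
    ISNULL(PartyIncome, 99999)     AS IncomeOrDefault, -- (b) NULL income shown as 99999
    PartyIncome * 1.12             AS IncomePlus12Pct, -- (c) income incremented by 12%
    GETDATE()                      AS BusinessDate,    -- (d) current date as business date
    21000                          AS CompanyCode      -- (e) default company code
FROM dbo.Party;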

Note: - i. When we receive the data from Flat File all the columns belong to string data type (DT_STR)

ii. When we retrieve the data from an Excel sheet, all the columns belong to the Unicode string data type (DT_WSTR)

  Data Conversions:-

Data conversions are done in three ways

1. By using the Data Conversion transformation.

2. By using the type cast operator in an expression.

Syntax :- <type cast operator>(column name)

(DT_I4)(PartyIncome)

(DT_DBDATE)("2010-10-10")

3. Directly doing at flat file source itself.

Rt click on FlatFile Source------>Show advanced editor----->Input.

And output properties---->Flat File source o/p---->output columns

------>Party income------>Datatype properties------->Datatype------->

Four-byte signed integer (DT-I4)

Aggregate Transformation:-

It performs aggregate operations such as Average, Sum, Count, Min, Max, Count Distinct and Group By.

*If the field (or) expression is of a numeric data type we can perform all the above operations.

*If the field is string (or) date, we perform limited operations like Group, count, Distinct count etc.

Eg: -Display Location wise Income sum and average
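The example above corresponds to the following T-SQL, shown only for comparison (column names assumed from the PARTY source):

SELECT PartyLoc,
       SUM(PartyIncome) AS TotalIncome,    -- location-wise income sum
       AVG(PartyIncome) AS AverageIncome   -- location-wise income average
FROM dbo.Party
GROUP BY PartyLoc;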

 

Flat File Source additional Properties:-

*Retain null values from the source as null values: this helps us to treat nulls coming from the source as nulls only (if we uncheck this option, null is treated as zero for integers and a space for strings).

Error output options:-

In case of error (or) truncated values coming from the source we can use either of the below options.

a. Ignore Failure---->In case of error (or) truncations it ignores the failure.

b. Redirect Row---->In case of error (or) truncation it redirects the row to the another destination.

c. Fail component ---> In case of error (or) truncation it simply fails the component.

Navigation:- FlatFile Source----->rtClick------>edit------> error o/p.

Flat File Destination File Format:-

1. Delimited---->The columns are delimited by commas, except the last one which is delimited by the newline character.

2. Fixed Width -----> The columns are defined by fixed widths.

3. Fixed width with row delimiters----->The columns are defined by fixed widths.An Extra column, delimited by the newline characters is added to define row delimiters.

4. Ragged Right----->The columns are defined by fixed widths, except the last one which is delimited by the newline character.

OLEDB SOURCE PROPERTIES:-

OLEDB SOURCE ----->rtClick ---->Edit----->

Data        access mode:

Note: -WE USE VARIABLE FOR DYNAMIC SOURCE RETRIEVAL.

MULTICAST TRANSFORMATION

It creates multiple copies of the same source, so that instead of doing multiple operations on the same source in multiple packages, we can do them in a single package by using the multicast.

*It improves the performance because it reads the data only one time from the source.

DRAWBACKS WITH INDIVIDUAL PACKAGES

i. Three Buffers for Source

ii. Multiple Reads on source

ADVANTAGES OF SINGLE PACKAGE SPLIT

 

i. SOURCE READ ONLY ONE TIME

ii. SINGLE TIME BUFFER OCCUPATION

MERGE TRANSFORMATION

It merges multiple input data sources. The restriction here is that the sources should be in sorted order, so that the output will also be in sorted order.

There are two (2) ways to Implement

a.IF THE SOURCES ARE NOT IN SORTED ORDER

 

 

 

 

 

 

 

 

 

DO THE BELOW PROCESS

CONDITIONAL SPLIT TRANSFORMATION

1. It splits data based on the condition

2. There are two type of output comes from this Transformation

a. conditions matched output

b. conditions unmatched output (or) default output

Eg: - Move HYD and BLORE data to separate files and unmatched data to another file.

---->CONDITIONAL SPLIT----->RT CLICK----->

ORDER      OUTPUT NAME        CONDITION
  1.       HYD_DATA           [PARTYLOC] == "HYD"
  2.       BLORE_DATA         [PARTYLOC] == "BLORE"

DEFAULT OUTPUT NAME

---->UNMATCHED_DATA

UNION ALL TRANSFORMATION

*It merges multiple Input sources (Two or more)

*No Need to take the Input in the sorted Order so that the output will also have unsorted data.

LIMITATIONS:-

1. The input source structures should be the same [no. of columns, order and data types of the columns]

OLEDB SOURCE EDITOR PROPERTIES:-

Data access Mode:

i. Table or view-----> To retrieve the data from table (or) view

ii. Table name or view name variable ----> The table name (or) view name is taken from a variable

iii. SQL command ---> We write a customized query to retrieve the data from the objects, so that only the required rows & columns are retrieved and less buffer is occupied every time.

Eg: - SELECT PartyID, PartyName FROM Party WHERE PartyCode IN (20, 40, 60)

iv. SQL command from variable ------> We pass the SQL command from a variable.

Note: - This is generally recommended for dynamic data retrieval.

OLEDB destination Additional Properties:-   

Data access mode:-

i. Table or view

ii. Table or view fast load----> It loads the data very quickly compared to normal view (or) Table loading.

During the fast load, there are couple of options we must select according to the situation

*Keep Identity                     *Table Lock

*Keep nulls                          *Check constraints

Rows per batch [10000]

Keep Identity uses identity column generated values.

Note: - This fast load option is useful when the table is having clustered index.

iii. Table name (or) view name variable - fast load

iv. SQL command.

Question: - In data access mode, after selecting the Table (or) view option and clicking the New button to create a table, when will the table actually be created?

Sol: - The table is created immediately after saving the changes (not at execution time).

SSIS Expressions:-

1. Write expressions only when they are small, because too many expressions and complex expressions decrease performance.

2. Generally we use expressions in various places

a. Precedence constraints

b. variables

c. for loop

d. connection string in the connection manager

e. Derived Column Transformation

f.condtional split

*As part of expressions there are many functions, Type cases and operators available.

Mathematical Functions:-[MATHEMATICAL OPERATIONS ON NUMERICAL VALUES]

ABS, CEILING, EXP, FLOOR, LOG, LN, POWER, ROUND, SIGN, SQUARE, SQRT. Etc.

Eg:- ROUND(4.82) ----> 5        CEILING(4.82) ----> 5        FLOOR(4.82) ----> 4

ROUND(4.26) ----> 4        ABS(-4.82) ----> 4.82

String functions:-[MANIPULATES STRING COLUMNS/EXPRESSIONS]

LENGTH, LOWER, LTRIM, REPLACE, CODEPOINT, FINDSTRING, HEX, SUBSTRING, REPLICATE, REVERSE, RIGHT, ----etc.

Eg:- LOWER("ABC") ----> abc                REPLICATE("a", 3) ----> aaa

LTRIM(" ABC") ----> ABC (it removes the leading space)        REPLACE("VINAY", "NA", "NNA") ----> VINNAY

TRIM(" abc ") ----> abc                SUBSTRING("VINAY", 2, 2) ----> "IN" (from point 2, string length 2)

FINDSTRING("VINAY", "N", 1) ----> 3

FINDSTRING("VINNAY", "N", 2) ----> 4

DATE/TIME functions:- [To work with dates/times: DAY, MONTH, GETDATE, GETUTCDATE (universal time coordinate), DATEADD, DATEDIFF, etc.]

DAY((DT_DATE)"2011-09-04") -----> 4

MONTH((DT_DATE)"2010-09-04") ----> 9

DATEADD("MONTH", 4, (DT_DATE)"2010-05-04") -----> 2010-09-04

DATEADD("MONTH", 4, <date>)

DATEDIFF("MONTH", (DT_DATE)"2011-04-09", (DT_DATE)"2011-07-09") ----> 3

NULL FUNCTIONS:-[VALIDATE NULL ARGUMENTS]

a. ISNULL(expression) ------> Result (True/False)

b. NULL(DT_DATE) ------> NULL [used to produce a "null date" value]

TYPE CAST:- [CONVERTS ONE DATA TYPE TO ANOTHER DATA TYPE]

----> (DT_I4)(column or expression)

-----> (DT_STR, <length>, <code page>)(column (or) expression)

----> (DT_WSTR, <length>)(column (or) expression)

-----> (DT_NUMERIC, <precision>, <scale>)(column (or) expression)

Eg:- (DT_NUMERIC, 8, 2)(123456) ----> 123456.00

OPERATORS:-

&& .......... Logical AND

|| .......... Logical OR

?: ------> writing an if condition.

<condition> ? <true result> : <false result>

Eg: - ISNULL(PARTYNAME) ? "UNKNOWN" : TRIM(PARTYNAME)

*Write an expression: if the date is null (or) the date length is zero (or) the date holds the null date "00-00-0000", display NULL; otherwise display the date.

ISNULL(JDATE) || LEN(TRIM((DT_WSTR, 10)JDATE)) == 0 ||

(DT_WSTR, 10)JDATE == "00-00-0000" ? NULL(DT_DATE) : JDATE

List:- Add (+), Concatenate (+), Subtract (-), Negate (-), Multiply (*), Divide (/), Modulo (%), Parentheses ( ), Equal (==), Unequal (!=), Greater (>), etc.

*Write an expression where the file name appears with the current time stamp in this format:

FILE NAME-YYYY-MM-DD-HHMMSS.TXT

(Assume the variable CDATE holds a value like "2011-08-09 04:23:22.0000000".)

"FILE NAME-" +

SUBSTRING((DT_WSTR, 30)CDATE, 1, 10) + "-" +

SUBSTRING((DT_WSTR, 30)CDATE, 12, 2) +

SUBSTRING((DT_WSTR, 30)CDATE, 15, 2) +

SUBSTRING((DT_WSTR, 30)CDATE, 18, 2) + ".TXT"

Note: -Fast Load option is useful when the table is having clustered Index.

b. In case of sorted input sources

*Take 2 flat files and assign locations

----> Go to each flat file ----> Rt click ---> Show Advanced Editor

----> Input and output properties ----> Flat file source output

IsSorted: TRUE

----> Output columns ----> Party ID ----> change the data type to DT_I4

Sort key position: 1

Flat file source error output ----> IsSorted: TRUE

Output columns ----> Flat file source error output column:

Sort key position: 1 ----> OK ---> OK

Note: -Merge works with two sources at a time

Differences between Merge & Union All

                   MERGE                              UNION ALL

Only two sources                     No Limit

Input should be sorted             Not Applicable

Sorted result                            Not Applicable

Merge Join: -It performs the merge operation along with joins. Generally it supports the below joins

a.Inner Join

b.Left Join

c.Full Join

Emp Table                                    Dept Table

EID   ENAME   DID                            DID   DNAME
1     A       10                             10    IT
2     B       20                             40    HR

 

 

Query structure:- SELECT <cols>/* FROM <Table A>

CROSS JOIN <Table B> (no where condition)

INNER JOIN <Table B> ON <condition>

LEFT OUTER JOIN <Table B> ON <condition>

RIGHT OUTER JOIN <Table B> ON <condition>

FULL OUTER JOIN <Table B> ON <condition>

Eg: - SELECT E.EID, E.ENAME, E.DID, D.DID, D.DNAME FROM

EMP E CROSS JOIN Dept D

(or) INNER JOIN Dept D ON E.DID = D.DID

(or) LEFT OUTER JOIN Dept D ON E.DID = D.DID

(or) RIGHT OUTER JOIN Dept D ON E.DID = D.DID

(or) FULL OUTER JOIN Dept D ON E.DID = D.DID

OUTPUT:-

 

 

 

 

Cache creation:-

 

 

----->Merge Join---->rt Click----->Merge Join Transformation Editor----->

Join type: Left outer join

 

Look Up Transformation:-

It looks up the required values on target and fetches relevant result.[Exact result]

Real Time Usage:-

a. To Fetch relevant value

b. While working with SCDs [SLOWLY CHANGING DIMENSIONS]

c. To have an exact match with the destination and to make query retrieval faster (it uses caches)

Types of Caches:-

1.Full Cache

2.No Cache

3.Partial Cache

No Cache: - Here there is no cache for the target table, so every time the source query hits the database and

fetches the result.

Advantage: - If the data is changing frequently and there are fewer records, it is recommended.

 

 

 

Drawbacks:-

1. Hits on the target increases and traffic also high.

Full Cache: - Here there is a cache for the target table, so every source request goes to the cache and fetches the data

ADV:-1. If the target is not changed and having more records.

Partial Cache: -Initially the cache is empty, for every new request source query hits the database and fetches the information to the cache. For Existing record, Source query hits the cache.

Adv:-1. Useful when more & more new records are added to the destination and there is heavy usage of existing records as well.

 

Look Up result:-

If there is no match in the lookup we can go for either of the below (ways)options.

a.Ignore failure

b.Redirect row to error output

c.Fail component

d. Redirect rows to no match o/p.

If the source is having multiple matches in the destination it returns “First Match”

 

 

Look up Performance improvement:-

a. Increase (or) decrease the cache memory according to the target table size, because a big cache with many rows gives bad performance.

b. Instead of taking the whole table, write an SQL query to keep only the required no. of rows and columns in the cache and to perform the lookup operations.

Look up ----->rt Click ---->Edit-----> Connection ----->Mark the "Use results of an SQL query" option and write a customized query of this type: SELECT DeptID, DeptName FROM Dept WHERE DeptID IN (10, 20)

Eg: - Retrieving the dept id and name from the Dept table based on the match from the Emp table (using full cache)

i.Matched records in one destination &

ii.Unmatched records to another destination.

Sol:-1.OLEDB Source(Emp Table)

2.Lookup --->RC ----->Edit--->

General

Select full cache

Specify how to handle rows with no matching entries .i.e, Redirect rows to no match output

Connection

OLEDB Connection manager: DB_MSBI

USE table or view : Dept.

Columns

Connect DID from EMP TO DEPT and Select DID,

DName from Dept.

OK

OK

3.Take two destinations and connect matched result to one destination and unmatched result to another destination.

Note:-

1.Unmatched result destination structure is like source table and it contains source unmatched records.

2. Lookup operations can be performed only on relational tables (we can't perform them on FLAT FILES)

Working with partial CACHE:-

In the above step do the below changes

NAVIGATION:-

 

Option:-

Note: -The cache created and dropped automatically

Creating the Named Cache:-

1. This cache is shareable across multiple packages.

2. This is recommended if the lookup table data does not change for a long time.

 

Navigation:-

Start----->Programs----->open BIDS------>New--->Project------->Package

----->Control Flow task ----->cache transform---->steps (refer page no.346)

Using the precache (or) named cache for operations:-

To do the operations take some changes in the “Look up”

i.e. Navigation:-Lookup----->rt Click----->Edit --->Lookup Editor

 

Specify how to handle rows with no matching entries: Redirect rows to no match output

Goto connection:

Cache connection manager-----> New--->Name----->

Check Use cache file name

OK

OK

Check columns, OK, OK

Fuzzy Lookup Transformation:-

This transformation is designed to get the result from the destination based on similarity, but not an exact match.

While doing the operations, more columns are added for estimation:

a. Similarity: - It displays how similar the source row is to the destination row (column).

b. Similarity_Column name:-

It displays how similar each source column is to each destination column.

c. Confidence: - How much confidence the system has in giving the result.

Note: - we go for similarities for string values

 

 

 

 

 

Navigation :-

Fuzzy Lookup:-

1.Take OLEDB Source as emp

2.Fuzzy look up  ----->rt Click-----> Edit----->

Reference Table

OLEDB connection manager:DB_MSBI

Table or view: Emp_address

Columns:

Connect Ename From EMP to ENAME in EMP_addres and select ENAME and address columns From EMP_add  OK OK

3.Take conditional split----->Rt Click------->edit

ORDER      OUTPUT NAME        CONDITION

1.                                 Partial _match            -Similarity<0.7

2.                                 Good_Match            -Similarity>0.7

3. Take two destinations and connect each condition.

4. Run Package.

 

PIVOT Transformation:-

  1. It converts rows information into columns.

2. It creates a less normalized representation of the data.

 

Ex: Pivot input:                                         Pivot output:

PartyID   SalComponent   SalAmount                       PartyID    HRA      DA        TA
1         HRA            20000                           1          20000    200000    100000
1         DA             200000                          2          30000    300000    300000
1         TA             100000                          3          40000    400000    200000
2         HRA            30000
2         DA             300000
2         TA             300000
3         HRA            40000
3         DA             400000
3         TA             200000

 

 

Un-pivoted column: - It doesn't participate in pivoting (it identifies the row, e.g., Party ID).

PIVOTED Column: - column values that are converted from rows to columns.

 

PIVOTED Value: -The values that are moved to pivoted columns.

 

COLUMN NAME                        Pivoted usage

Party ID                                   1-------------- un pivoted

SAL Component                     2--------------- pivoted columns

SAL amount                            3--------------- pivoted   values
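For comparison only, the same rows-to-columns result can be sketched with the T-SQL PIVOT operator; the source table name Pivot_SRC is taken from the navigation below, and this is not what the SSIS transformation generates internally:

SELECT PartyID, [HRA], [DA], [TA]
FROM (SELECT PartyID, SalComponent, SalAmount
      FROM dbo.Pivot_SRC) AS src                      -- rows as in the pivot input above
PIVOT (SUM(SalAmount)
       FOR SalComponent IN ([HRA], [DA], [TA])) AS pvt;   -- one column per salary component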

 

LINEAGE ID: - It is a unique ID taken by a system for every column that is mapped.

 

Navigation:- 

1. Take Pivot_SRC as the source.

2. Pivot ------ rt click ------ edit -------

Input columns:  select all columns.

Input-output properties:

Pivot default Input:

Do the below settings

Party ID-----pivot usage ----- 1

SAL component -------pivot usage------2

SAL amount ------pivot usage------3

Note: -     identify lineage ids of party ID, Sale component, sale amount

Ex: -   column name                        lineage

Party                                     349

Sal component                     371

Sal amount                           375

PIVOT Default o/p:-

*Click add column ----Rename column to party ID ---- Go to properties and set

Pivot key value: party ID, Source column: 349

*click add column ------>Rename column to HRA ----->Go to properties

And set pivot key value: HRA, source column: 375

----->click add column ------> Rename column to TA ----->go to properties

And set pivot key value: TA; source column: 375

---->click add column ---->Rename column to DA ------>go to properties

And set pivot key value: DA: source column: 375

Audit: -   It displays audit information for every row coming from the source or It adds audit information to the source data.

Ex: -   Audit types:-   1.  Execution instance GUID

2. Package    ID

3.  Package name

4. Version ID

5. Execution start time

6. m/c name

7. User name

8. Task name

9.Task ID

Character Map: - It applies string operations such as lowercase to uppercase and vice-versa, etc.

Copy column: - It creates multiple copies of the column.

Export column: - It exports column value from rows in the dataset to a file.

Ex: -   exporting, Images from column to a file.

Import column: - It imports into column, values from a file

Ex: - Loading images from file to table rows.

Fuzzy Grouping: -It groups corresponding rows.

OLEDB Command:-It executes an SQL command for each row in a dataset.

Percentage sampling: - It takes a sample portion (percentage) of rows from the source data set.

Ex: -Taking 20% of samples rows from a dataset.

Row Sampling:-  It displays the specified no of sampled rows.

Ex: - Display 10,000 sample rows from a table.

Row count: - It counts the no of rows in a dataset.

Unpivot:- It converts columns information into rows; that is, it creates a more normalized representation of the data set.
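A minimal T-SQL sketch of this reverse operation, assuming a hypothetical Pivot_Out table shaped like the pivot output shown earlier:

SELECT PartyID, SalComponent, SalAmount
FROM (SELECT PartyID, HRA, DA, TA
      FROM dbo.Pivot_Out) AS p                            -- assumed pivoted table
UNPIVOT (SalAmount FOR SalComponent IN (HRA, DA, TA)) AS unpvt;   -- back to one row per component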

Script component:- It executes a custom script (VB.NET or C#.NET).

Term Lookup: - It counts the frequency with which terms in a reference table appear in a dataset.

Control Flow Items:-It contains containers & tasks

Containers: - It contains other tasks.

Ex: - For each loop container

For loop container

Sequence container

Tasks: - There are normal (and) as well as maintenance tasks.

For loop container: - It executes the underlying tasks a specified number of times [the iteration count is known here]. It has 3 sections:

A. Init section: - The variables are initialized in the container.

B. Eval section: - Here the condition is verified.

C. Assign section: - Here the variable value is changed (e.g., incremented).

*For loop requires a "variable" to do the operations.

Create variable:-

SSIS menu ----->variables ------>Add variables ------>

Name: counter

Data type: Integer.

-->Take for loop container on flow ---->RC---->Edit.

INITIAL Expression: @ counter =0

Eval . Expression: @ counter <5

Assign Expression:@counter = @counter+1

----> Take data flow

Take flat file source (To be copied) and OLEDB destination connect it

NOTE: - The above example loads the file's data into the table once per loop iteration (5 times with @counter < 5).

For Each Loop container:-

-----> It is designed to load the group of similar objects or working with similar objects whose count is "unknown".

-----> For loop is having a condition so that we know the count; whereas    for each loop we don't know the count.

Ex: - Loading set of records to dataset or similar dataset records one by one

---> Loading similar files from a folder to a table etc.

---> It uses enumerator for its operation.

---> The enumerator that supports are

For each file enumerator

For each item enumerator

For each ADO enumerator

For each ADO.NET schema rows enumerator

For each from variable enumerator

For each node list enumerator

For each SMO enumerator

---->Enumerator values are not changed within a package (variable values do change).

Ex: - Loading the available files in the same structure from a folder to a table.

 

Navigation:-

Take For each loop container ---->RT click---->edit

Collection:

Enumerator: for each file enumerator.

Folder: C:\output\group

Files: "*.TXT"

Retrieve file name: select fully qualified

Select traverse folders

Variable mappings:

Variable Drop Down ----> select new variable – name: Group var --->ok

2. Take data flow task in the for each loop container.

          3. Data Flow Task: -   Take flat file source ---->specify one file in the group

Go to source connection manager ----->rt click----->properties ---->expression--->click ellipse---->property drop down list: select connection string.

Expression: click ellipse.

Variables: Drag & drop group var to the expression section.

Ok----->ok

Take OLEDB destination and connect.

File system Task: - It performs file and folder options such as copying, moving, deleting, creating….etc.

Ex:- 1. Moving the files from folder 'x' to folder 'y'.

2. In the above for-each-loop container example, move the successfully loaded files into another folder [so that the files which are not loaded can be easily tracked].

Navigation:-   File system task ----> rt click ---->edit--->

Destination: Is destination path variable: false

Destination connection: Browse to success folder.

Overwrite Destination: False.

Operation: move file.

Source: Is source path variable: false.

Source connection: Browse to source connection manager.

Execute package task: -   It executes the packages which are available in "file system" and "SQL server database".

This is designed to execute another package within the main package. We can control the flow b/w these packages.

1. Take execute package task on control flow

2 rt click --- edit----

Location: File system

Connection: specify desktop any folder package.

Password: If password is there, specify

Execute out of the process: false.

Note:  Execute out of process ‘true' means, each package runs with a separate process.

Execute SQL TASK:- It executes SQL of any database (oracle , tera data, excel ,SQL Server…..etc).

To connect to the corresponding database, we must specify the corresponding type.

It executes queries, commands in the corresponding database.

Note:   1.If we turn the execute out of process option into true the sub packages run separately from the main packages process.

Navigation:- 1. Take Execute SQL task on control flow.

  1. Rt click ----- edit

Connection type: OLEDB

Connection: Local Host: DB_MSBI

SQL Statement: DELETE FROM party;

Bypass prepare: true

If we select "false" instead of "true", the SQL is prepared (converted) into another form of query and it runs every time on the target database.

Working with procedure:- 

CREATE PROCEDURE [dbo].[SAMP] @PID INT, @PNAME

VARCHAR(30) AS BEGIN INSERT INTO ForEach_Tab (PartyID, PartyName)

VALUES (@PID, @PNAME);

END

EXEC SAMP 10, 'KKK'

SELECT * FROM ForEach_Tab;

DROP PROCEDURE SAMP;

Executing the procedure from executing SQL Task:-

SQL Statement: EXEC SAMP 20, 'driven'

Script Task: - It supports scripting (VB.NET / C#.NET). Rt click ----- edit

Script language: Microsoft visual Basic 2008.

Click Edit Script ------ Add the below statement in the main ()

MsgBox("MSBI class")                          save ------------ ok

 

Real Time:-

1. To reuse the existing code of OLTP systems coding.

2. To write custom coding.

Ex: - Loading Multiple worksheets data in a single excel sheet to a table.

Bulk Insert Task:-

1. It loads bulk data with max speed into the tables.

2. It cannot perform any intermediate operations.

3. Before Loading into the table, the table should already be created.

4. It loads files only [it loads file data directly into an already created (related) table].
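The closely related T-SQL BULK INSERT statement gives an idea of the operation; a minimal sketch in which the file path and format options are hypothetical:

BULK INSERT dbo.Party
FROM 'C:\data\party_src.txt'
WITH (FIELDTERMINATOR = ',',   -- column delimiter
      ROWTERMINATOR   = '\n',  -- row delimiter
      FIRSTROW        = 2);    -- skip the header row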

Bulk Insert Task Navigation:-

Rt click ---- edit-----

Destination connection: Local Host: DB_MSBI

Table: party

Format:

Row Delimiter: {CR}-{LF}

Column Delimiter: ,(comma)

Source connection:

File: Browse the File.

Backup Database Task:-

It   takes the backup of SQL Server databases.

Backup ------------ rt click ---------- edit ----------- Backup Database Task-----

Name: SRC

Backup: Full

Databases: DB_MSBI

Backup to: * Disk (or) Tape

Create a backup file for every database ----- click OK
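The same full backup can also be taken directly in T-SQL; a minimal sketch (the backup file path is hypothetical):

BACKUP DATABASE DB_MSBI
TO DISK = 'C:\Backup\DB_MSBI_Full.bak'
WITH INIT;   -- overwrite any existing backup set in this file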

Send Mail Task:-

This is designed to send e-mails to corresponding recipients.(simple mail transfer protocol)

 

It requires an SMTP Server.

Navigation:-

Send mail Task ------ RC ------ Edit ----- Mail ------- SMTP connection---------- new ----- specify name ----- SMTP Server (IP address) ------ click ok

------ From (which user) ------ To (which user) ----- Subject (e.g., "Job finished successfully"),

----- Message source Type: Direct Input ---------- priority: High ----- click ok.

Active script Task:- It parses and executes active scripts.

Analysis services Execute DDL Task:-It executes DDL operations of analysis services.

Analysis Services Processing Task:- This is used to process the data of facts, cubes and dimensions in Analysis Services.

Execute DTS 2000 Package Task:- It executes SQL Server 2000 DTS packages.

Execute Process Task:- It executes Win32 executables.

FTP Task:- It performs file operations such as sending and receiving files.

 

 

 

The above transfer tasks move the specified objects from one SQL Server instance to another SQL Server instance.

Slowly changing dimensions:-

To process the data from granular (staging) tables to the main tables, we follow a mechanism called slowly changing dimension types.

Ex: There is a customer table which holds customer details. If there is any change in the customer details, there should be a corresponding manipulation in the process.

 

 

 

 

 

 

Slowly changing Dimensions [SCD'S]:-

 

Type-1:-1.New Record Inserts

2.Old Record update/Modifies

 

 

Here in EMP_History, after the type-1 operation is done, the customer's location is changed (replaced) from HYD to USA.

Note:- one customer only one location

Type-2 :-

1. New records inserted with version ‘o' (zero)

2. For an existing record, the changed row is inserted with an incremented version (that is, 1, 2, ...).

Note: - highest version indicates the current location of the customer.

Type-2(status Mechanism):-

1. New record inserted with status ="current"

2. For an existing (changed) record:

a)      The new row is inserted with status = "current"

b)      The earlier record's status is modified to "expired".

 

Note: - status =current means the customer is in that location.

TYPE-2(Date Mechanism):-

1. A new record is inserted with a start-date, and an end-date of '9999-12-31'.

2. For an existing (changed) record:

a) The new row is inserted with a start-date and an end-date of '9999-12-31'.

b) The old record's end-date is modified to the new record's start-date.

Note:- The current location of the customer is identified by end-date = '9999-12-31'.

TYPE-3: - For every customer previous /current Locations maintained.

Note:- If you want to increase the number of history values kept, that many columns have to be added to the history table, which is a burden on the system.

In real time we frequently use type-2 (date mechanism) as the history maintenance mechanism. [See page 57 for the practical.]
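A minimal T-SQL sketch of the type-2 date mechanism described above, using the EMP_HIST table and column names from the later practical example; the sample values for @EID, @EName and @NewLoc are assumptions:

DECLARE @EID INT = 101, @EName VARCHAR(30) = 'Vinay', @NewLoc VARCHAR(30) = 'USA';  -- assumed sample values

-- Expire the customer's current row by closing its end date
UPDATE dbo.EMP_HIST
SET    END_DT = GETDATE()
WHERE  EID = @EID AND END_DT = '9999-12-31';

-- Insert the new current row with the open-ended end date
INSERT INTO dbo.EMP_HIST (EID, ENAME, ELOC, ST_DT, END_DT)
VALUES (@EID, @EName, @NewLoc, GETDATE(), '9999-12-31');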

Maintenance cleanup task:-It removes files left over from a maintenance plan

Notify operator task: - it sends an e-mail MSG to any SQL Server agent operator.

Rebuild Index Task:- It rearranges the data on the index pages by rebuilding the index. This improves the performance of index scans and seeks.

Reorganize Index Task:- It defragments and compacts clustered and non-clustered indexes on tables and views.
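These two tasks roughly correspond to the T-SQL below; a minimal sketch against an assumed table:

ALTER INDEX ALL ON dbo.Party REBUILD;      -- rebuild all indexes on one table
ALTER INDEX ALL ON dbo.Party REORGANIZE;   -- reorganize all indexes on one table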

Shrink Database task:-  It reduces the disk space consumed by database and log files by removing empty data & log prices.

Update statistics Task:- It updates statistics of the object if there are already collected.
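The rebuild, reorganize, statistics and shrink tasks map to plain T-SQL maintenance commands. A hedged sketch against an illustrative table dbo.EMP and the DB_MSBI database:

ALTER INDEX ALL ON dbo.EMP REBUILD;      -- Rebuild Index task: drops and rebuilds the index pages
ALTER INDEX ALL ON dbo.EMP REORGANIZE;   -- Reorganize Index task: defragments and compacts the leaf level
UPDATE STATISTICS dbo.EMP;               -- Update Statistics task: refreshes optimizer statistics
DBCC SHRINKDATABASE (DB_MSBI);           -- Shrink Database task: releases empty data/log space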

Execute T-SQL Statement Task:- It executes T-SQL commands and queries against a SQL Server database.

Go to Execute T-SQL Task ------- rt click  ------ Edit  -------   Execute

T-SQL Statement task ------- connection ------ SRC (Take any connection)

T-SQL Statement:  USE DB-MSBI;

Delete from EMP_NEW1;

Implementation of Slowly Changing Dimensions (SCD) (through the wizard):- In SSIS, along with Type-1 and Type-2, we can also implement the fixed attribute type.

Fixed attribute:- Here an important business column is marked as "fixed". If any value in that column changes, we can take an appropriate action such as failing the transformation or ignoring the change.

Navigation   :-

  1. Take   OLEDB Source.

2. SCD ---- rt click ------ edit----- Next-------------

Connection Manager: DB_MSBI

Table or view: EMP_HIST

Specify the column EID AS business key.

Set change type: fixed attribute.

Click next.

Select fail the transformation if changes are detected in a fixed attribute ------- next ----- next --- finish.

SCD Type -1:- (changing attribute)

Like above with only below two changes

  1. Set dimension column: part loc.
  2. Set change type: changing attribute.

SCD Type-2:- (status mechanism)

Like above with only below changes.

Set change type: Historical Attribute.

Select use a single column to show current and expired records.

Columns to indicate current record: status

Value when current: current

Expired value: expired.

SCD TYPE-2:-   (date mechanism)

Select use start and end dates to identify current and expired records.

Start Date column: start _date

End date column: End _ date

Variable to set date values: System::ContainerStartTime

SCD Type -2:- (Date mechanism manually)

(FIG(A))

 

 

Navigation:-

OLEDB Source: EMP-Daily

Look up: rt click ------- edit

General:

Cache mode: full cache

Specify how to handle rows with no matching entries: redirect rows to no match output.

Connection:-

OLEDB Connection manager: Local Host: DB-MSBI

Use Table or view: EMP _HIST.

 

Columns:- 

Connect EID, ENAME, ELOC to the HIST table columns for validation, click ok -------------- ok

OLEDB Command: -   rt click ----- edit

Connection manager Tab:-  

Connection manager:

Component properties Tab:-

SQL Command: UPDATE EMP_HIST SET END_DT = ? WHERE EID = ? AND END_DT IS NULL

Column Names Tab:

Connect ST_DT to Param_0 (END_DT) and EID to Param_1, click ok ------ ok

DF-insert:-

OLEDB Source: EMP – Daily

LOOK UP: Rt click ------edit

General:

Take OLEDB Destination as EMP_HIST and connect the fields except END_DT; the sketch below shows the equivalent insert.
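In plain T-SQL the insert branch above is roughly the statement below; END_DT is left NULL so the row is treated as the current one (a sketch only, using the column names from the figure and EMP_DAILY standing in for the EMP-Daily source):

-- New and changed employees become the current history rows; END_DT stays NULL
INSERT INTO EMP_HIST (EID, ENAME, ELOC, ST_DT)
SELECT EID, ENAME, ELOC, GETDATE()
FROM   EMP_DAILY;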

Data profiling Task   :-

1. Newly added in 2008

2. This task is helpful to profile the data before processing it further.

3. Generally it profiles the below information:

Go to data profiling task --------------- rt click --------edit---- profile requests-----

Profile type: candidate key profile request

Column length distribution profile request

Column Null ratio    profile request

Column pattern    profile request

Column statistics profile request

Column value distribution profile request

Functional dependency profile request

Value inclusion profile request

Create ADO.NET connection for the source table

1.SSIS Menu --------- New connection -------- ADO.NET -----Add -------

Data connections: MSBI-DB

2. Take the Data Profiling task on the control flow.

Rt click ------- edit

General:

Destination Type: File connection.

Destination: New connection -------- create file -------

C:party -profile – Request.

Click Quick profile ------

ADO.NET connection: MSBI-DB

Table or view: party

Compute:

Select the required options

Click ok ----------- ok.

3. Execute data profile Task & observe the file.

4. Start Menu -------- programs -------- SS2008 ------- Integration services ------- data profile viewers ------- open ------- specify the file path (C:party-profile-result) and monitor the analysis by selecting each attribute.

NOTE:- In other ETL tools, for profiling we need to go for 3rd-party tools.

Various ways of debugging:-

There are three ways of debugging .They are

I.            By executing the package partially.

If multiple tasks are present in a package then we can execute a specific task.

rt click that task ----->click execute task.

II.            By Break points.

To stop the execution of the package at a particular "event" and then either continue or stop, breakpoints are used.

We perform break point only in control flow.

Consider the (FIG(A)),taking break point after DF-update successful.

Navigation:-

DF-update----- >rt click----->edit break points ----->mark break

Select the option "Break when the container receives the OnPostExecute event" ---->click ok.

---->Execute the task so there is a break after DF-update.

Monitor the statistics or observe the intermediate result; then, based on the result:

a)      In case of continuation ----->go to debug menu ----->click continue

b)      In case of stopping ----> go to debug menu ----->click stop debugging.

 

 

Hit count:- It is taken along with a breakpoint so that the break happens only when a condition on the hit count is met. It is generally useful when we have containers like the For Loop, etc. (for every type except "Always").

Types:-

  1. Always:-Execution is always suspended when the break point is hit.  Ex: always
  2. Hit count equal to:-Execution is suspended when the no of times the break point has occurred is equal to hit count.
  3. Hit count greater than or equal to:-Execution is suspended when the no of times the break point has occurred is equal to or greater than the hit count.
  4. Hit count multiple:-Execution is suspended when multiple of the hit count occurs.

Ex:- if we set this option to five, it takes a break for every fifth time.

 

 

 

    III.            Data viewer:-

These are used only in the data flow task b/w source, destination and transformation. We take this data view option only in the links between the components

Go to any link in the data flow task -----> rt click ---->data viewers----->click add----->select grid -->ok.

 

Precedence constraints:-

These are useful to control the flow between various tasks in the control flow.

Constraint options:-

The evaluation operations are:

a)      Constraint:-

Success (green link):- if the previous component succeeds, the next one executes.

Failure (red link):- if the previous component fails, the next one executes.

Completion (blue link):- if the previous component either succeeds or fails, the next one runs.

 

b)      Expression ---> the link evaluates an expression; when it is satisfied, the next task executes.

Ex:- @counter == 6

c)       Expression and constraint ----> only if both are satisfied is the next task executed.

d)      Expression or constraint ----> if either of them is satisfied, the next task executes.

 

Implementation of Expression:-

  1. Declare a variable called “counter”.
  2. Take a for loop and do the below settings

---->rt click ----->properties:

Initial expression: @counter = 6

Evaluation expression: @counter < 9

Assign expression: @counter = @counter + 1

------>rt click the link ---->edit

Evaluation operation: Expression and Constraint; Value: Success

Expression: @counter==6

Multiple constraints:-

We take multiple constraints so that they interoperate and control the execution of the constrained task.

Two ways:-

  1. Logical AND. All constraints must evaluate to true.
  2. Logical OR, one constraint must evaluate to true.

Note:- The workflow can be controlled by precedence constraints.

Check Point:- Checkpoint configuration helps us resume a package from the last failed task. That is, if a package contains multiple tasks and any task fails, the failure point is stored in the checkpoint file; once we restart the package, the checkpoint makes it start from that last point instead of from the beginning. Once the package succeeds, the checkpoint file is deleted.

1. Take two Execute SQL Task, one task with correct SQL command and second task with improper command (So that it fails).

2. Go to each task ---->Properties------->Fail package on failure

3. Control Flow ----->rt click----> Properties------>

Check Point File Name: Desktop\CheckFile.txt

Checkpoint usage: If exists.

Save Checkpoint: True.

4. Execute the package, as 2nd task is failed, check point file is generated.

5. Rectify the SQL command in the 2nd task and re-run the package; it then starts from the 2nd task instead of the 1st task.

Logging:- It uses various log providers to track the log information at particular "events". The log providers are

*SSIS log provider for windows Event log

*SSIS log provider for Text files

*SSIS log provider for XML files

*SSIS log provider for SQL server

*SSIS log provider for SQL server profiler.

In real time this log information is used to perform the below tasks.

a. To eliminate bottle necks

b. To trouble shoot the package

                This log information is different from the Progress tab information because:

i. It contains auditable information.

ii. It has the start point and end point of each task in the package.

iii. It has the machine name, operator name, etc.

Navigation:- SSIS Menu----->Logging---->Select the package (or)

task in the left hand side panel---->

Right hand side

Provider Type: SSIS log provider for Text files---->

 

EVENT HANDLING:-

Implementing action at a particular event is called event handling

Eg of events:-     a. OnPostExecute

b. OnPreExecute

c. OnInformation

d. OnError etc…..

Event handling eg:-

1. Sending an Email after successful execution of package

Navigations:-

Go to Event handler tab---->

Select                    Executable                          Event handler

Package                               OnPostExecute

Drag and Drop send mail task, do the configuration. Run package and observe the result

2. Delete the data before loading the data into GRPLOAD table

Navigation:-Go to event handler tab

Executable                                          Event handler

Data Flow Task                                    OnPreExecute

Take execute SQL task and do the below settings

Connection: DB_MSBI

SQL statement:- DELETE FROM GRPLOAD;

Execute package and see the expected result

Configuration--->New connection ------>create file--->specify file location ----->OK---->OK.

Details section---->select on post execute---->

Click OK------>OK

Package Configuration:-
                These are helpful while migrating (or) moving packages from one environment to another environment.

Development to testing

Testing to production…..etc

There are many ways we can create configurations:

1. XML configuration file

2. Windows registry entry

3. Environment variable

4. Parent package variable

5. SQL server database

Note:-In real time the frequently used configurations are ”XML and SQL server Data Base”.

As XML is an industry standard and platform independent,

most organizations prefer it.

XML Configuration:-

1. Take a Data Flow task----->Flat File Source (C:Hyd.Txt)----->

Flat File destination (c:Hyd_opt.Txt)

2. SSISMEnu------>Package configurations------->check enable

Package configuration------->click add----->Next

Configuration Type: XML configuration File.

Configuration File Name: Browse and specify filename (new)

Click Next----->Check the connection string property for source connection managers

Click Next----->Finish------->Close

3. Goto configuration file ----->open using any editor like Microsoft visual studio version selector(or notepad or word pad…etc)

Change the source file name to C:Bangalore.txt.

Change the destination file name to C:Bangalore_opt.txt and save.

4. Run the package; now the package runs with the configuration file settings.

Deployment & Security:-

To provide a runnable solution in testing (or) production, we generally go for deployment (moving the developed application from one environment to another environment).

In SSIS there are two deployments.

a. File System Deployment:- In this case the packages deployed to a file system (i.e., to a specified drive and folder)

b.SQL server Deployment:-Here packages deployed in SQL server Integration services.

*To deploy the packages we require manifest file.

*Manifest File contains the information which is important at the time of deployment.

Note:-In real time we always perform the second type of deployment (i.e., SQL Server deployment)

*It holds metadata information of the package and its components (configurations, security, protection level, etc.).

*When we build the project (or) solution, the manifest file is created/updated.

Manifest file creation:-

Solution Explorer

l------->Project

l------->rt click

l------->Properties

Build: BIN

Deployment: Create Deployment utility:

                                Allow configuration changes: True

                                Create deployment utility: True

Deployment output path: bin\Deployment

Build menu----->Build SSIS Project

Go to solution----->Bin---->Deployment------->Observe manifest file

 

a.File System Deployment:-

Go to manifest file----->rt Click------>Deploy---->Next----->

Select File System deployment ----->specify Folder to deploy

------>Click Next---->Next---->Finish

Go to deployed folder and observe the packages and configuration files deployed (or)not.

Execution:-Package------->rt click----->open with------>SQL server

2008 Package execution utility----->run (Execute)

b. SQL Server Deployment:-

Manifest File----->rt Click------>Deploy----->Next

----->Select SQL Server Deployment------>Specify

Server Name: Local Host

Package path: Maintenance Plan

Click Next----->Next----->Finish.

Verifying and running on the server:-

Goto SSMS---->Integration services----->Stored

packages ----->MSDB----->Maintenance Plan

Running Packages: Maintenance Plan ------>Packages------>rt Click

------>Run Package---->execute.

Note:- Setting the property "Allow configuration changes" to True allows configuration changes after deployment.

Applying Security:-

Two levels: i. BIDS LEVELS----> Password Protection.

ii. SSMS-------->Role base security

Password Protection:- It help us to prevent from

i. Unauthorized deployment

ii.Unauthorized manipulation to the packages.

At BIDS level, for better security, along with the password we also set the "Protection level".

The protection levels available are:

a. Don’t save sensitive

b. Encrypt sensitive with user key.

c. Encrypt sensitive with Password

d. Encrypt all with pass word

e. Encrypt all with user key.

f. Server storage.

Sensitive Information:-

Generally package connection strings, user-defined variables, enumerators, etc. are considered sensitive information.

=>Go to control flow ----->rt click----->Properties---->security----->

In the Package Password option assign a password----->In the Protection

Level option select the "Encrypt all with password" option.

Build Solution and test in either of the ways

a. By opening the Solution again

b. By deploying the Manifest file.

In both of the above situations it asks for the password.

SSMS Level Security:-

A user or group assigned to a role and every role will have responsibilities, so the users act according to the responsibilities.

1. Creation User/Group:-

My Computer----->Rt Click----->Manage----->

Local Users and Groups ----> Users ---->rt click---->New

User. User Name: VINAY----->Click OK

 

SSMS LEVEL:-

StartMenu-----> SSMS---->Data base engine--->Security

----->logins---->rtclick---->New login

Login Name----->Search----->Vinay----->OK

Click OK

System Databases----->MSDB----->Security------>users---->

Rt click---->New user

User Name: VINAYUSR

Login Name: Rowan Vinay

Check the required owned schemas & role members.

Eg:- Check db_datareader, db_backupoperator ….etc

Sol:-SSMS---->Integration services---->connect---->

Stored packages---->MSDB---->Data collector ---->rtclick on

Any Package----->click package roles---->Specify the roles in the reader and writer sections.

Incremental Loading:-

 

1. Maintaining History [SCD]

2. Direct Loading

3. ETL Loading(Indirect load)

Eg:- Loading a specified day's data from one table to another table

Sol:- Go to SSIS---->Variables---->Click add & create two variables

 

Name       Scope       Data Type    Value
EDATE      Package     DateTime     6/2/2011 12:36 AM
SDATE      Package     DateTime     6/2/2011 12:36 AM

 

OLEDB Source----->rtclick----->edit----->

Click parameters:

Parameter 0: USER: SDATE

Parameter 1: USER: EDATE

3. OLEDB Destination--------> EMP_HIST Table

4. SSIS Menu ------->Package configurations----->check enable

Package configuration----->Click add---->Next

Configuration Type: XML configuration file

Configuration File Name: Browse and specify a new file name

Click Next----->

Go to SDATE&EDATE and check the value sections.

Next----->Finish----->close

Open configuration file------>change SDATE and EDATE &run the package.
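For the OLEDB source above, the data access mode would be "SQL command" with a parameterized query; a minimal sketch (the table and date column names are assumptions):

-- Parameter 0 maps to USER::SDATE and parameter 1 to USER::EDATE
SELECT EID, ENAME, ELOC, LOAD_DATE
FROM   EMP
WHERE  LOAD_DATE >= ? AND LOAD_DATE < ?;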

Working with import Export Wizard:-

It performs data transfer operations between database and database, database and file, and file and file.

 

 

BIDS LEVEL:-

Go to Solution Explorer

l------->rt click SSIS Package

l------->Select SSIS Import and Export Wizard

l---->Next

l----->Choose a data source.

Data source:

Server Name: Local host

Database: DB-MSBI

l------>Click Next---->

Choose a destination

l----->Database

l------->DB New

l------>Click Next

l---->Specify Table copy (or) Query

l------>Select the option copy the data from the tables

l---->Click Next

l---->Select the options

*copy tables or views

*Write a query to specify the data to transfer

Click Next

l--->Select the tables(trying to move)

l----->Click Finish

Now the system creates a package according to the settings given; execute the package and observe the result, i.e., go to DB New under Databases and observe.

 

SSMS level:-

Database Engine

l-----> Go to any database

l---->rt click

l------>Tasks

l---->select Import data(or)Export Data

Working with Transaction and Isolation level:-

Transaction:- It is a logical collection of statements (or) steps which succeed (or) fail as a unit.

The transaction isolation level determines the duration that locks are held

i. Read Uncommitted:- This is often referred to as a "dirty read", because we can read modified data that hasn't been committed and it could get rolled back after we read it.

ii. Read Committed:- It acquires shared locks and waits on any data modified by an in-progress transaction. This is the SQL Server default.

iii. Repeatable Read:- Same as Read Committed, but in addition shared locks are retained on the rows read, for the duration of the transaction.

In other words, any row that is read cannot be modified by any other connection until the transaction commits (or) rolls back.

iv. Serializable:-

Same as Repeatable Read, but in addition no other connection can insert rows that would appear in a select statement already issued.

In other words, if we issue a select statement in a transaction using the serializable isolation level, we will get the exact same result set if we issue the select statement again within the same transaction.
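A short T-SQL illustration of switching the isolation level for a session (READ UNCOMMITTED is effectively what the NOLOCK hint used later gives for a single query):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;   -- dirty reads allowed
BEGIN TRAN;
SELECT * FROM Party;                                -- can see uncommitted changes from other sessions
COMMIT;

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;       -- repeated selects in one transaction return the same result set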

Transaction Options:-

*Required ----->If transaction exists join it, else start a new one.

*Supported----->If transaction exists join it (this is the default)

*Not Supported-------> Don't join an existing transaction.

 

 

Navigation:-

SELECT * FROM Party WITH (NOLOCK);

See the query result (It displays uncommitted data)

5. Package------>Debug menu----->continue

[2nd Task Failed, so sequence container also failed]

6. SSMS----->Data base Engine----->DB-MSBI------>rtClick----->

New Query: Select *from party;

See the query result (It display old data, because newly added data is rolled back).

Note:-MSDTC (MICROSOFT DISTRIBUTED TRANSACTION CO-ORDINATOR).

Creating and Working with jobs:-

Job:- It is a process of running a particular task [IS, AS (or) a set of SQL queries] at a stipulated (scheduled) time.

*Jobs can be "one time" jobs (or) "recurring" jobs.

*To work with a job, the SQL Server Agent should be running (it is in SSMS).

Eg:- Running a file system package on every Monday at morning 9:00am.

Sol:-

Step Name: SSIS_STEP

Type: SSIS package

Package Source: File System

Package: Specify the Package location

Click OK

Schedule------>New

Name: Repeatable run.

Schedule type: Recurring

Occurs: Weekly

Recurs every: 1 week(s) on Monday

Click OK----->OK
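The same job can also be scripted with the msdb stored procedures instead of the SSMS dialogs; a hedged sketch (the job name and package path are placeholders):

USE msdb;
EXEC sp_add_job         @job_name = N'SSIS_Weekly_Load';
EXEC sp_add_jobstep     @job_name = N'SSIS_Weekly_Load',
                        @step_name = N'SSIS_STEP',
                        @subsystem = N'SSIS',
                        @command   = N'/FILE "C:\Packages\Load.dtsx"';
EXEC sp_add_schedule    @schedule_name = N'Repeatable run',
                        @freq_type = 8,             -- weekly
                        @freq_interval = 2,         -- Monday
                        @freq_recurrence_factor = 1,
                        @active_start_time = 90000; -- 9:00 AM
EXEC sp_attach_schedule @job_name = N'SSIS_Weekly_Load', @schedule_name = N'Repeatable run';
EXEC sp_add_jobserver   @job_name = N'SSIS_Weekly_Load';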

Monitoring Job Status:- Go to the job ---- View History,

to see the log information regarding the job

and the job execution statistics.

Note:- The below schedulers are frequently used in real time.

Performance Tuning:-

For people with more than 2 years of experience this is a mandatory concept, and they must have good knowledge of it.

Situation to Go:-

1. To create a package with optimization

2. There is an existing package which is running for a very long time.

In the above situations we need to identify the "bottlenecks" and resolve them.

These bottlenecks can be at many levels:

a. Package Level

b. Source level

c. Destination level

d. Transformation level

e. Data Flow Task level

f. System level

Identifying Bottlenecks:-

By using the "Progress tab" information (or) the log providers we identify the bottlenecks (because they display step-by-step execution).

Package Level Tuning tips:-

a. Implement check points to have better restart ability of components in the package.

b. Disable event handlers:-

Event handlers decrease package performance

So, unnecessary event handlers should be removed (or) disabled.

c. Maximum concurrent executables:-

Increasing the number of executables increases the parallelism of the package, so it executes concurrently in less time.

Maximum Error Count:-

The default is '1', which means the package fails on a single error. If you increase the error count, the package doesn't fail until the number of errors reaches that count.

2. Data flow task level tips:-

i. Delay validation:- (True/False)

True means the validation of a component is delayed until the execution of the other component finishes.

Description:- Until "Execute SQL Task 1" finishes executing, validation of "Execute SQL Task 2" is not started.

 

BLOB Temp Storage path:-

Specify this at the time of working with Binary Large objects such as images, media files…etc.

Default Buffer max. rows and size:-

Increase or decrease according to the volume of data loading i.e., for more volume increase rows and buffer size, for less volume, decrease rows and buffer size

Engine Threads:-

Default it takes ‘10’, if we increase more threads it runs more parallel and uses more processes

To finish the data flow operations.

Run In Optimized Mode:- If it's true, the data flow avoids unnecessary transformations, conversions and

operations, reducing resource usage.

Source Level Tunning Tips:-

a. In case of Flat File

i. Try to take the flat file local to the system

ii. Use the property "Fast Parse = True", so that the columns use faster, locale-neutral parsing routines and avoid unnecessary conversions.

b. If the source is Table or view

i. Create indexes on the source table so that it retrieves the data faster.

ii. Instead of taking a whole table, take an SQL query (or) SQL command as the data access mode to get only the required columns and rows of data.

4. Destination Level Tuning tips:-

a. In case of a Flat File:

i. Try to take the file local to the system.

b. In case of a Relational Destination (Table or view):

i. Use data access mode as SQL command to load only the required rows and columns.

ii. Use data access mode as Fast Load to load the data much faster.

c. If the table contains constraints, indexes or triggers, the loading will be slow, so we need to disable (or) drop them; once the loading is finished, recreate (or) enable them.

To implement this there are many ways; one common way is sketched below in T-SQL.
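A hedged sketch on an illustrative target table dbo.EMP_HIST (the index name IX_EMP_HIST_ELOC is hypothetical):

-- Before the load
ALTER INDEX IX_EMP_HIST_ELOC ON dbo.EMP_HIST DISABLE;      -- stop maintaining the non-clustered index
ALTER TABLE dbo.EMP_HIST NOCHECK CONSTRAINT ALL;           -- skip FK/check constraint evaluation
DISABLE TRIGGER ALL ON dbo.EMP_HIST;

-- ... fast load runs here ...

-- After the load
ALTER INDEX IX_EMP_HIST_ELOC ON dbo.EMP_HIST REBUILD;      -- re-enable the index by rebuilding it
ALTER TABLE dbo.EMP_HIST WITH CHECK CHECK CONSTRAINT ALL;
ENABLE TRIGGER ALL ON dbo.EMP_HIST;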

 

Eliminating duplicates from a table:- (Only 2008)

DELETE FROM Emp WHERE %%physloc%% NOT IN

(SELECT MIN(%%physloc%%) FROM Emp GROUP BY Eid);

Physloc: - Physical Location of a row

There are 3 more ways to delete the duplicates

1. By using row number

2. BY using Rank/Dens Rank

3. By using Intermediate tables

Duplicate Records:- SELECT Eid, Ename INTO X FROM Emp

GROUP BY Eid, Ename HAVING COUNT(*) > 1;
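For the row-number approach mentioned above, a minimal sketch that keeps one row per (Eid, Ename) and deletes the rest:

WITH Ranked AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY Eid, Ename ORDER BY Eid) AS rn
    FROM   Emp
)
DELETE FROM Ranked WHERE rn > 1;   -- deleting from the CTE removes the underlying duplicate rows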

Pic-1

 

Loading with Multiple tables:-

To work with multiple tables, the below objects are required:

a. Sub Queries

b. Set operations

c. Joins

d. Views

e. Procedures

f. Functions

g. Triggers

h. Cursors

i. Extended Procedures

Note:-  *For the 5th max salary take n-1 = 5-1 = 4.

*For the top 5 salaries take 5.

There are many ways to find out top sal & max salaries.

a. By using RANK() OVER(….)

b. ROW_NUMBER() OVER(…..)

c. DENSE_RANK() OVER(…..)

d. The TOP keyword

Top:-It displays top values from the table

Syntax:-Select top (number)* (or) <columns> from <table name>

Ex:-

1. Display top 2 rows in the table

Select top (2)* from Emp

2. Display the top 3 salaries

SELECT TOP (3) * FROM party ORDER BY PARTYINCOME DESC
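A hedged sketch of the ranking-function approach for the Nth (here 5th) highest salary, assuming an Emp table with a Sal column:

-- 5th highest salary using DENSE_RANK(); change the rank value for any other N
SELECT Eid, Ename, Sal
FROM (
    SELECT Eid, Ename, Sal,
           DENSE_RANK() OVER (ORDER BY Sal DESC) AS rnk
    FROM   Emp
) AS d
WHERE d.rnk = 5;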

CTE(Common Table Expressions)     

It is also called a temporary named result set, valid within the scope of a single executing statement; it can be used within a SELECT / INSERT / UPDATE / DELETE / CREATE VIEW / MERGE statement.

Syntax:- WITH <cte name> (<columns>) AS (<query>)

Ex:- WITH XX (EID, ENAME)

AS (SELECT Eid, Ename FROM Emp) SELECT * FROM XX;

Note:- INSERT and SELECT statements very often use CTEs.

SET OPERATIONS:-

*They perform operations row-wise between result sets.

*They follow set theory and set operators.

UNION:- It merges the rows of both result sets, excluding duplicates.

UNION ALL:- It merges the rows of both result sets, keeping duplicates.

INTERSECT:- It takes the common rows from both result sets. (The number of columns and the order/data types of the columns should be the same.)
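A small illustration of the three operators on two single-column result sets (the table names Emp_2010 and Emp_2011 are only illustrative):

SELECT Eid FROM Emp_2010
UNION            -- distinct rows from both sets
SELECT Eid FROM Emp_2011;

SELECT Eid FROM Emp_2010
UNION ALL        -- all rows from both sets, duplicates kept
SELECT Eid FROM Emp_2011;

SELECT Eid FROM Emp_2010
INTERSECT        -- only rows present in both sets
SELECT Eid FROM Emp_2011;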

SSAS (SQL SERVER ANALYSIS SERVICES)

NEED OF ANALYTICAL APPLICATIONS:-

To create a multidimensional object and to provide multidimensional analysis, we require analytical applications

EX:- COGNOS, BO ,MICROSTRATEGY ,SSAS, HYPERION, OBIEE. ETC

*SSAS, COGNOS, MICROSTRATEGY, etc. are pure 'MOLAP' tools.

STRUCTURE OF MULTIDIMENSIONAL OBJECTS:-

Generally a dimensional modeling structure (star schema, snowflake schema, etc.) is used for multidimensional objects, because providing many-to-many relationships with E-R modeling is difficult.

MULTIDIMENSIONAL OBJECT USAGE (CUBE USAGE)

It provides support to end user (or) client by using various tools and applications

Pic-2

 

 

 

ANALYSIS SERVER COMPONENTS AND OPERATIONS: -

a. Cube designing

b. Cube creation

c. Writing calculations

d. performing actions

e. Implementing KPIS (Key performance Indicators)

f. Providing multi-language support using translations

g. Designing the aggregations

h. Creating named calculations and queries

SAMPLE PROJECT

TEXTILE MANUFACTURING INDUSTRY

Description:-

A textile manufacturing industry manufactures various types of shirts, trousers, etc.

Shirts come in different varieties (half sleeve, full sleeve, etc.); trousers also come in different varieties (full, half, ¾, etc.).

*These products are sold in different locations and in different time zones (time periods).

*Various raw materials (chemicals, oils, cloth, etc.) are required to manufacture these products.

*There are different storage areas (godowns, item keeping units) to maintain the product stock.

*For this project the client requires an analytical solution where they can take appropriate decisions on the business.

STEPS REQUIRED FROM END TO END TO IMPLEMENT THE ABOVE PROJECT

*Need to collect the decisions, from major to minor.

*According to the decisions, the analytical team collects or gathers the required business information for the project.

*Analyzing the business requirements.

*Identifying dimensions and facts for the above business and creating LDMs (Logical Data Models).

*Converting LDMs to PDMs (Physical Data Models).

*Based on the PDMs, we go for designing of the dimensional model (star schema, snowflake schema, etc.).

*Writing code from the design document in Analysis Services or another analytical application (BO, COGNOS, HYPERION, etc.).

Note:- Except for the last step, all the remaining steps are platform independent.

Conclusion:-

For the above business, the technical team (data modelers, data designers, data architects, domain experts, SMEs (Subject Matter Experts)) identifies the below dimension and fact tables.

Dimension and Fact tables

a. Time Dimension

b. product Dimension

c. Location Dimension

d. Raw material dimension

e. Item storage unit dimension (IKU dimension), plus one fact table for all of the above dimensions

CUBE:-

It is a multidimensional object constructed with dimensions and facts in a particular design, used for taking multidimensional decisions.

CUBE CREATION:

STEPS:-

1. Open BIDS

2. Create data sources

3. Create a data source view

4. Provide relationship between dimensions and facts

5. Create a cube

6. Manipulate the components (Action, KPI…….etc)

7. Deploy the cube

8. Browse cube (or) perform reconciliation (or) unit testing

Pic-3

 

 

Pic-4

 

 

PRACTICAL IMPLEMENTATION OF CUBE

1. Open BIDS

2. File--->New------>Project-------> Template----->Analysis services

Project----->Project Name

Name: TEXTILES-CUBE

Location: C: Documents and settings Vinayaka

Solution name: TEXTILES-CUBE

3. View----> Solution Explorer [ok]

4. Create two Data Sources DS-Textiles, DS-Textiles 2 with the below procedure

Data Sources----->RC----->New------> SERVER NAME---->LOCAL HOST

*Select or enter data base name

LOCAL HOST: TEXTILES--------> OK----->NEXT----->

*Inherit------->Data Source name: DS-Textiles---->Finish

Like this, create another data source DS-Textiles 2

5. Data Source views----->RC----->New Data Source view----->Next----->

Relational data sources

DS-Textiles 1 ------->select ------>next------>

*Create logical relationships by matching columns

Next----->Select Available objects

RAW MATERIAL                RAW MATERIAL

LOCATION                            LOCATION

IKU                                         IKU

NEXT------->

NAME: DSV-TEXT -------->FINISH

6. GOTO DSV_CUBE_DB, for taking remaining (TIME, PRODUCT, TEXT FACT) TABLES in its it follow this process,

DSV_CUBE_DBDESIGN------->RC---->ADD/REMOVE tables

Data Source: DS_textfiles 2

Available objects:          Included object

TIME                                      TIME

PRODUCT                              [>] PRODUCT

TEXT_FACT                           TEXT_FACT

CLICK OK

7. Provide relationships between the fact table and the remaining dimension tables by dragging and dropping column mappings from the fact table columns to the dimension columns. While connecting a fact column to a dimension column it displays the message below; click OK:

The destination table of the newly created relationship has no primary key defined. Would you like to define a logical primary key based on the columns used in this relationship?

After all dimension columns connections,

DS_CUBE_DB------>RC------>Arrange tables, then it looks like this

Pic-5

 

8. CUBES------->RC----->NEWCUBE---->NEXT------->

*Using existing tables------>next----->

Measure group tables

Pic-6

 

 

Now various tabs opened and we can see the cube structure as well [FACT IN YELLOW, DIMENSIONS IN

BLUE COLOR] **

Build---->Deploy ----->TEST the cube

Note: -

Important options: -

a. Build ------->Deploy:- If the cube structure is changed in BIDS, this option is used to reflect the same in the cube database.

b. Build---->Process:- If the data or structure of the data sources changes, this option is used to reflect the same in the cube database.

c. Build---->Build Solution:

It takes the required set up files in the solution folder

AFTER DEPLOYMENT:-

We need to ensure the cube is deployed successfully to do this follow the below two general approaches

a. In BIDS, go to the cube browser, try to analyze and see the data.

b. Any source table's data should match the cube database table data.

Ex:- No. of rows in the source (Text_fact) data = 40 rows

SELECT [Measures].[text fact count] ON Columns

FROM [Text files_cube]

(40 rows)

Fire the above query in the below navigation

SSMS------>Analysis Services----->Text file---> Cube----->RC---->MDX------>Query

GENERAL ERRORS IN THE LAB:-

1. If the fact table uses key values other than those present in the corresponding dimension key column, you may get an error because of a foreign key violation.

Eg:- Assume there is a LOCATION table with the below locations:

"HYD

MUM

USA"

If we use locations other than these in the fact table, then we get errors.

USING THE CUBE DATABASE: - There are many ways

a. Analyzing the cube database in BIDS browser

b. Using "Pivot table" in Excel applications to connect and work with cube database

c. Using reporting tools (Cognos, BO, SSRS, etc.) to generate reports

d. By writing MDX queries in the cube database

e. Using the "ProClarity" tool to analyze the data

ANALYSING IN THE BIDS BROWSER:-

*Take dimensions (or) facts either row-wise (or) column-wise and analyze.

*Go to the menu bar on the top for filtering the data in the browser.

This bar can also be called the "FILTER BAR".

Eg:- Take Actual cost and Estimated cost column-wise and Location ID, Product ID, Raw material ID row-wise and analyze.

Pic-7

 

 

TO SEE THE DATA IN THE DIMENSIONS /FACTS

There are two different ways

a. Go to Data Source view-------->Select table ----->RC----->Explore Data

b. Go to Cube Structure------>Select table ----->RC------->Explore Data

WORKING WITH CUBE STRUCTURE

*It displays cube design, measure groups, measures, dimensions etc……..

*We preview the data here for dimensions & facts

MEASURE GROUP:-

It contains collection of measures

*Default measure group table is 'fact table' of the cube

*we can add new measure group tables.

Adding new measure group:-

1. Take measure group table in Data source view

Pic-8

 

Note: -Now 2 Measure group table are available in the Cube

MEASURE:-

*It is the numerical presentation value in fact table

*It describes business information

*May be simple value (or) Aggregated value (SUM, AVG, MIN, MAX ETC……..)

Eg: - Taking SUM (ACTUAL COST) as a measure to the measure group table

Pic-9

 

 

 

ADDING CUBE DIMENSION:-

1. Add the table (xx) in the data source view

2. Solution Explorer---->Dimensions---->RC---->

  Pic-10

 

 

 

 

 

EDIT DIMENSION:

1. We edit dimension to manage attributes and to create hierarchies

a. Taking all attributes to display in browser & Analysis

Pic-11

 

 

B. Creating Hierarchies:-

*It is designed to provide top down and bottom up analysis

*While analyzing, we can drill down for a deep dive and drill up for high-level information.

*Hierarchies contain multiple levels and members.

*Each hierarchy should have at least 2 levels.

Pic-12

 

 

 

CREATING TIME HIERARCHY:-

Time----->Edit dimension------->Drag Year, Qtr, Month one by one to the hierarchy section & rename the hierarchy:

  Time-Hierarchy
Year
Qtr
Month

 

 

SAVE-->Deploy

To see the hierarchy usage:-

Go to cube browser-------->

Take Time-Hierarchy in the browser pane, take Location and Product dimension attributes and the Actual cost and Estimated cost measures, and see the time hierarchy drill down.

NEW LINKED OBJECT:-

*This wizard is useful to link measure groups and dimensions in another Analysis Services database or cube to the current database or cube.

*Linked objects appear the same to users as other measure groups and dimensions in the cube.

*We can also use this wizard to import KPIs, calculations and actions.

Eg: - Importing a calculation (Eg : -Sum cost)  from another cube(text cube2)

New linked object------>Next---->

Analysis Services data sources------>

New data Source---->Next----->New----->

Server (or) file name: LOCALHOST

Pic-13

 

 

 

 

Now the calculations are imported, Deploy and use

WORKING DIMENSION USAGE WIZARD

*This wizard is useful to add, remove dimensions and their relationship

*We can add dimensions to another dimension (or) measure group table

Relationship Types

1. No Relationship

2. Regular

3. Fact

4. Referenced

5. Many –to-Many

6. Data mining

NO RELATIONSHIP

*The dimension and measure group are not related

*The dimension is available in the cube, but at the time of analysis it doesn't participate with its values.

   Pic-14

 

 

 

 

Eg: - Removing the relationship between product dimension and measure group table

Pic-15

 

 

 

 

Deploy, go to the browser and analyze; now the product dimension doesn't participate in the analysis.

REGULAR RELATIONSHIP:-

*Here the dimension is joined to the measure group table directly (this generally represents a star schema structure).

*When we are going for this, we should have proper key relationships between dimension & FACT

Eg: - Adding Product Dimension to the measure group table

Pic-16

 

 

Measure Group:-

Pic-17

 

Select Relationship Type: Regular

Granularity attributes: PRODUCT ID

Dimension TABLE: PRODUCT

Measure Group table: TEXT_FACT

  Dimension Columns                     Measure Group Columns

                  *Product ID                                        *Product ID

                                                                OK

Deploy-----> Go to the browser & analyze; now the product table participates in the analysis.

REFERENCED RELATIONSHIP

*The dimension table is joined to an intermediate dimension table, which in turn, is joined to the fact table

*It Provides ‘Snow Flake' Schema type of structure

*We require appropriate column references between the dimension, the intermediate dimension and the fact table.

Pic-18

 

 

Eg: -Connecting sub-product dimension to a product dimension

Pic-19

 

Select relationship type: Referenced

Reference Dimension: Product

Intermediate Dimension: Raw Sub product

Reference Dimension attribute: Product ID

Intermediate Dimension attribute: Sub Product Id

OK

Deploy

FACT RELATIONSHIP:-

*The dimension is the fact table here

*Generally textual information represents dimension information and numeric information represents fact information in this table

*These types of cubes are called "standalone cubes".

    Pic-20

 

 

 

MANY-TO-MANY RELATIONSHIP:-

*The dimension table is joined to an intermediate fact table. The intermediate fact table is joined, inturn, to an intermediate dimension table to which the fact table is joined.

Pic-21

 

 

DATA MINING RELATIONSHIP:-

*The target dimension is based on a mining model built from the source dimension

The source dimension must be included in the cube

   Pic-22

 

 

 

MDX [MULTIDIMENSIONAL EXPRESSIONS]:-

*To work with Normal two –dimensional applications, two-dimensional programming languages are enough(c, c++, .Net……etc)

*To work with Two-dimensional databases, two-dimensional query language SQL is enough (Oracle SQL, T-SQL, Tera data SQL….etc)

*To work with multidimensional databases, the above-specified languages are not enough, so we go for a separate expression and query language, 'MDX'.

IMPORTANT TERMINOLOGY IN MDX

A. Member: Dimension attribute is called Member

*Syntax:-

[Dimension table Name]. [Attribute Name]

Ex:-[Product]. [Product Name]

[Location]. [Location Name]

MEASURE:

Fact attribute is called Measure

Syntax:[Measures].[Measure name]

Ex:  [Measures]. [Actual cost]

[Measures]. [Estimated cost]

TUPLE:- Collection of Measures or Members is called Tuple

a. Starts with (

b. Ends with)

Example:- ([Measures].[Actual cost], [Measures].[Estimated cost])

SET:-Group of Tuples are called as SET

a. Starts with {

b. Ends with}

Ex:  {

([Measures].[Actual cost], [Measures].[Estimated cost]),

([Measures].[Actual cost], [Measures].[Estimated cost])

}

MDX USAGE IN SSAS:-

a. For creating Cubes, KPI's, Actions, Calculation, Partitions etc……Objects.

b. MDX usage in other applications, such as

i. Hyperion Essbase

ii. SAP NetWeaver

MDX QUERY:-

1. Generally we write MDX queries against the Analysis Services cube database.

2. For retrieving data from the cube database we use SELECT statements.

Syntax:

Select {Measures/ Members} ON Columns,

{Measures/ members} on Rows

From<Cube name> where<condition>;

SOME FUNCTIONS IN MDX AND THEIR MEANINGS:

There are two types of functions

1. SOME FUNCTIONS TAKES PARAMETERS

EX: TOPCOUNT, BOTTOMCOUNT, ISEMPTY, etc.

2. FUNCTIONS WITHOUT PARAMETERS

Ex:- *MEMBERS, *ALL MEMBERS,*CHILDREN,*PREV MEMBER,*CURRENT MEMBER etc….

*MEMBERS:- It displays the child members without including the total.

*ALLMEMBERS:- Displays all members and their total.

*PREVMEMBER:- Displays the previous member relative to the current cell member.

*CURRENTMEMBER:- Displays the current cell member value.

 

FUNCTIONS WITH PARAMETERS:-

ISEMPTY:-It verifies whether the member is empty or Not

Syntax:- IsEmpty(expression)

TOPCOUNT:- Displays the top values.

Syntax:- TopCount(set, count)

BOTTOM COUNT:-

Display bottom count of values

Syntax:- BottomCount(set, count)

FILTER:- It filters the given set based on the condition.

Syntax:- Filter(set, condition)

ORDER:- It sorts the set in ascending or descending order on the given expression.

DISTINCT():- It displays the distinct set values.

Syntax:- DISTINCT({set})

WORKING WITH HIERARCHIES:-

We refer ‘hierarchies’ member values in two ways.

a. [Dimension]. [Hierarchy]. [Members]

b. [Dimension]. [Hierarchy]. [Level].[members]

NOTE:- If we do not specify the level, it display all member values

CROSS JOIN:-

CROSSJOIN({set}, {set}) (OR) {set} * {set}

IMPORTANT MDX QUERIES:-

NAVIGATION:-

SSMS------>Analysis services----->TEXTILES_CUBE------>RC----->NEW QUERY--->MDX

QUERIES:-

1. DISPLAY THE DEFAULT (FIRST) MEASURE'S SUM

Syntax:- SELECT FROM [TEXTILES_CUBE]    (cube name)

2. DISPLAY THE NO. OF ROWS IN THE CUBE

Syntax:- SELECT [Measures].[Text Fact Count] ON COLUMNS FROM [TEXTILES_CUBE]

3. DISPLAY ALL BRANDS' ACTUAL COST

Syntax:- SELECT [Measures].[Actual cost] ON COLUMNS, [Product].[Brand] ON ROWS FROM [DSV_textiles_cube]

Actual cost

9800

4. DISPLAY THE BRANDS AND THEIR ACTUAL COST

Syntax:-

SELECT [Measures].[Actual cost] ON COLUMNS, [Product].[Brand].CHILDREN ON ROWS FROM [Textile_cube]

5. DISPLAY EACH BRAND'S ACTUAL COST AND THE SUM OF ALL ACTUAL COST

Syntax:-

SELECT [Measures].[Actual cost] ON COLUMNS, [Product].[Brand].ALLMEMBERS ON ROWS

From [TEXTILES_CUBE]

 

6. DISPLAY EVERY RAW MATERIAL AND LOCATION WITH THEIR ACTUAL AND ESTIMATED COSTS:-

Syntax:-

SELECT {[Measures].[Actual cost], [Measures].[Estimated cost]} ON COLUMNS,

CROSSJOIN([Raw material].[Raw material ID].CHILDREN,

[Location].[Loc Name].CHILDREN) ON ROWS

FROM [TEXTILES_CUBE]

7. DISPLAY THE FIRST RAW MATERIAL'S ACTUAL COST:

SELECT [Measures].[Actual cost] ON COLUMNS,

[Raw material].[Raw material ID].FIRSTCHILD ON ROWS

FROM [TEXTILE_CUBE]

8. DISPLAYING TOP TWO VALUES OF THE LOCATION

Syntax:-

SELECT [Measures].[Actual cost] ON COLUMNS,

TOPCOUNT([Location].[Loc Name].CHILDREN, 2) ON ROWS

FROM [TEXTILE_CUBE]

9. DISPLAYING BOTTOM TWO LOCATION VALUES

Syntax:- SELECT [Measures].[Actual cost] ON COLUMNS,

BOTTOMCOUNT([Location].[Loc name].CHILDREN, 2) ON ROWS

FROM [TEXTILE_CUBE]

10. DISPLAY THE LOCATIONS WHOSE ACTUAL COST IS > 1000

Syntax:-

SELECT [Measures].[Actual cost] ON COLUMNS,

FILTER([Location].[Loc name].CHILDREN,

[Measures].[Actual cost] > 1000) ON ROWS

FROM [TEXTILE_CUBE]

11. DISPLAY THE DATA SORTED BY ACTUAL COST IN ASCENDING ORDER

Syntax:- SELECT [Measures].[Actual cost] ON COLUMNS, ORDER([Location].[Loc name].CHILDREN,

[Measures].[Actual cost], ASC) ON ROWS FROM [TEXTILE_CUBE]

12. DISPLAY THE CROSS PRODUCT OF LOCATION AND PRODUCT WITH THEIR ACTUAL AND ESTIMATED COSTS

Syntax:-

SELECT {[Measures].[Actual cost],

[Measures].[Estimated cost]} ON COLUMNS,

CROSSJOIN([Location].[Loc name].CHILDREN,

[Product].[Product name].CHILDREN) ON ROWS FROM [TEXTILE_CUBE]

(OR)

SELECT {[Measures].[Actual cost], [Measures].[Estimated cost]} ON COLUMNS,

[Location].[Loc name].CHILDREN *

[Product].[Product name].CHILDREN ON ROWS FROM [TEXTILE_CUBE]

13. DISPLAY DISTINCT LOCATION AND PRODUCT COMBINATIONS WITH THEIR COSTS

Syntax:-

SELECT {[Measures].[Actual cost], [Measures].[Estimated cost]} ON COLUMNS,

DISTINCT(CROSSJOIN([Location].[Loc Name].CHILDREN,

[Product].[Product name].CHILDREN)) ON ROWS FROM [TEXTILE_CUBE]

14. DISPLAY THE 2009 YEAR ACTUAL COST SUM

Syntax:-

SELECT [Measures].[Actual cost] ON COLUMNS

FROM [Textile_cube] WHERE [Time].[Year].&[2009]

(OR)

SELECT [Measures].[Actual cost] ON COLUMNS,

[Location].[Loc name].CHILDREN ON ROWS

FROM [DSV_TEXTILES_CUBE] WHERE

([Time].[Year].&[2009])

CONDITIONAL Expressions

IIF:-

IIF(<condition>, <success statement>, <failure statement>)

Eg: IIF([Measures].[Actual cost] - [Measures].[Estimated cost] > 0, 1, 0)

CASE:-

Evaluates against multiple conditions

Case:

When<condition 1>then<Statement 1>

When<condition 2> then<Statement 2>

When<Condition 3> then<Statement 3>

When<Condition 4> then<Statement4>

Else <Statement 5>

END

Ex:-

CASE WHEN [Measures].[Actual cost] - [Measures].[Estimated cost] > 0 THEN -1

WHEN [Measures].[Actual cost] - [Measures].[Estimated cost] < 0 THEN 1

WHEN [Measures].[Actual cost] - [Measures].[Estimated cost] = 0 THEN 0

END

CALCULATIONS:-

These are intermediate operations performed between the data source and the cube database.

*These are created at BIDS level.

*They use MDX syntax commands to perform operations.

*They support different operators, set functions, methods, members, etc. in the calculations.

We write Calculations in two ways

1. FORM VIEW

2. SCRIPT VIEW

 

FORMVIEW:-

Here calculations are prepared manually, one by one.

Eg: -Creating a calculated member, finding the difference between Actual & estimated costs

Navigation:-

Pic-23

 

Name: Diff cost

Parent Hierarchy: Measures

Expression: [Measures].[Actual cost] - [Measures].[Estimated cost]

FORMAT String: Standard

Visible: True

Associated measure group: (undefined)

Color Expression

Fore color: 10711080 /*Blue*/

Back color: 12632256 /*Silver*/

Click ok------->Save

Build------>Deploy, Go to Browser, select Actual cost, Estimated cost, Diff cost and see the result

SCRIPT VIEW:-

We use the script view to write multiple calculations as code, provided we have hands-on coding experience.

In script view it is easy to move the calculations from one environment to another environment

(DEV------>Test, Test-----> Production)

Syntax:-

CALCULATE;

CREATE MEMBER <member name> AS <expression>,

<settings>;

Eg:-

CALCULATE;

CREATE MEMBER CURRENTCUBE.[Measures].[Sum Cost] AS

[Measures].[Actual cost] + [Measures].[Estimated cost],

FORMAT_STRING = "Standard",

BACK_COLOR = 123452 /*Silver*/,

FORE_COLOR = 432345 /*Blue*/,

VISIBLE = 1;

CALCULATED MEASURE:- If the parent hierarchy is Measures, then the calculated member is called a "calculated measure".

**NON-EMPTY BEHAVIOR:-

It is used to resolve non-empty queries in MDX calculations.

*If the Non-empty behavior property is blank, the calculation must be evaluated repeatedly to determine whether a member is empty.

*The Non-empty behavior property contains the name of a measure; the calculation is treated as empty

when that measure is empty.

*Generally we go for Ratio calculations

Eg:-

There is a calculation ACIEC, If ‘EC" is empty it throws an error. In this situation if we set up non-empty behavior Property to ‘EC' then system avoids calculations and displays the calculated results are empty if

The EC is empty

Additional Expressions:-

1. Display Actual cost as '99999' if it is empty

Syntax: IIF(IsEmpty([Measures].[Actual cost]),

99999, [Measures].[Actual cost])

2. Display, for each member of the time hierarchy, its share of the entire actual cost

Syntax:

([Measures].[Actual cost], [Time].[Time-Hierarchy].&[1]) /

([Measures].[Actual cost], [Time].[Time-Hierarchy].[All])

3. Display the Actual cost of a member relative to its parent

Syntax:

([Measures].[Actual cost], [Time].[Time-Hierarchy].&[1]) /

([Measures].[Actual cost],

[Time].[Time-Hierarchy].&[1].PARENT)

ACTIONS:-

These are performed at the time of events

*Generally, the event is "Clicking" the cell

*There are 3 Types of actions

1. Statement (or) URL (or) Record set action

2. Drill through Actions

3. Report Action

1. URL ACTION:-

Calling the URL, while analyzing the data is called URL action

Eg: -Going to the locations website, when we click location attribute value

Navigation:-

Action------>New Action

Name: URL_Action

Action Target

Target type: Attribute members

Target object: [Location].[Loc name]

Action Content

Type: URL

Action Expression: HTTP://www.ALL LOCAT.COM

*Event is clicking ‘Location', Action is opening the URL

Build---->Deploy ---->Go to the cube browser, highlight

Location-----> RC----->Click Action (URL-Action)

2. REPORT ACTION:-

*Calling a report while analyzing the data is called Report Action

*It requires the report server name and the report path (the server name and the report name).

Eg:- COGNOS, BO, HYPERION….etc

Navigation:-

Action----->New Action

Name: Report_Action

Action Target

Target type: Attribute members

Target object: [Location].[Loc name]

Report Server

Server Name: ROWAN: 8080

Report Path:  Report-Server_DHW/------

Report Format: HTML5

Build------>Deploy---->Go to cube browser----->

RC---->Location------>Click Report Action

Note:- It can take parameters to display parameterized report content.

3. DRILL THROUGH ACTION:-

*To drill to other columns (or) to open a separate analytical window for the required columns, we go for a drill through action.

*Simply put, when we move from one analytical window to another analytical window for the required details, we go for a drill through action.

Eg:-Drilling through locations, Time, Raw materials, Dimensions while Browsing the Actual cost

NAVIGATION:-

Action----> New drill through Action

Name: Drill through_Action

Action Target

Measure group members: Text-fact

Drill through columns:

Dimensions   Return columns

Measures   Actual cost

Location   Loc Name

Time    year

Raw material   Raw material/Function

Build----->Deploy----->go to Browser-----> Reconnect----->

RC-----> Actual cost ----->click Drill through Action

PERSPECTIVE

*It is helpful to provide limited visibility of cube objects.

*Whatever dimensions, facts, KPIs, calculations, etc.

you want to make visible to the user, you take them into the perspective.

*Now the users see only the selected dimensions, facts, KPIs and calculations.

Navigation:-

PERSPECTIVE-------->NEW PERSPECTIVE-------->

CHANGE PERSPECTIVE NAME, CHECK (OR)

UNCHECK the required objects and items

Eg:[*] TEXTILE FACT  []TIME

[*]PRODUCT   [] LOCATIONS

Build------>Deploy------>go to cube Browser------>

Change the perspective and see the Limited items (or) objects

Note:-

To give a particular user visibility of only 2 fact tables and 5 dimensions out of, say, five fact tables and 50 dimensions, perspectives are useful.

TRANSLATIONS

This is helpful to provide Multilanguage support

*In 2000, translation and multi-language support was provided at the web page level, but from 2005 onwards this support is at BIDS level.

NAVIGATION:

TRANSLATION------->NEW TRANSLATION----->

SELECT LANGUAGE TELUGU (INDIA)----->Click OK

Eg:-

OBJECT    DESCRIPTION

TEXT FACT   VILUVALA PATRIKA

ACTUAL COST   ASALU DHARA

ESTIMATED COST COCHIN CHIN DHARA ETC

Build------>Deploy------>Go to browser------>and change the language to select Telugu---->

See the result

Build------>Deploy:- If the cube structure changes at BIDS level, this reflects the same in the cube database.

Build----->Process (processing data):- If the data (or) structure in the data sources changed, this reflects the same in the cube database.

Pic-24

 

 

 

 

KPI (KEY PERFORMANCE INDICATOR)

*These represent important business information.

*Generally we go for KPIs whenever we want to represent high-level and important business information using graphical items.

Eg: of Graphical items:-

GAUGE

REVERSED GAUGE

THERMOMETER

CYLINDER

FACES

VARIANCE ARROWS etc……

STANDARD ARROWS:-

DOWN

LINEAR

UP

TRAFFIC SIGNAL NUMERIC:-

RED ------> -1

YELLOW------> 0

GREEN -------> 1

Eg: - Display in Red color, If the Actual cost is greater than (>) estimated cost

Display in green color, If Actual cost is less than (<) estimated cost,

Display in yellow color, if both are equal in a graphical item traffic signal

Navigation:

KPI----->NEW KPI---->

NAME: Sales_KPI

ASSOCIATED MEASURE GROUP: TEXT-FACT

VALUE EXPRESSION:[MEASURES].[ACTUAL COST]

GOAL EXPRESSION: [MEASURES].[ESTIMATED COST]

STATUS

STATUS INDICATOR: TRAFFIC LIGHT

STATUS EXPRESSION:

CASE

WHEN [MEASURES].[ACTUAL COST] - [MEASURES].[ESTIMATED COST] > 0 THEN -1

WHEN [MEASURES].[ACTUAL COST] - [MEASURES].[ESTIMATED COST] < 0 THEN 1

WHEN [MEASURES].[ACTUAL COST] - [MEASURES].[ESTIMATED COST] = 0 THEN 0

END

TREND INDICATOR: Standard Arrow

TREND EXPRESSION: <same as status Expression>

Build----->Deploy

Go to 'KPI BROWSER VIEW' and then see the result
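The KPI can also be read from MDX using the built-in KPI functions. A minimal sketch, assuming the cube is named [Textiles] (the cube name is an assumption; the KPI name Sales_KPI is from the steps above):

SELECT { KPIValue("Sales_KPI"),
         KPIGoal("Sales_KPI"),
         KPIStatus("Sales_KPI"),
         KPITrend("Sales_KPI") } ON COLUMNS,
       [Location].[Loc Name].MEMBERS ON ROWS
FROM [Textiles]

KPIStatus and KPITrend return the -1 / 0 / 1 values produced by the expressions above, which the client maps to the traffic light and arrow graphics.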

PARTITIONS:-

Real-time usage:-

*Multiple partitions will process the data in parallel

*We can process only the specified partition to process the required data (so that limited system resources will be utilized)

*While creating partitions, we go for data binding between the partition and the table data

*There are ‘2' types of bindings available

a. Table Binding

Here the fact table of the measure group is directly bound to the partition

b. Query Binding:-

Here the partition is created based on a query

Note:- In real time we use query binding more than table binding

Storage settings:-

While creating the partitions, we must specify one of the below storage settings

a. ROLAP (Relational OLAP)

In this case data and aggregations are stored in relational sources

b. MOLAP (Multidimensional OLAP)

Here data and aggregations are stored in multidimensional sources

c. HOLAP (Hybrid OLAP)

Here data is stored in relational sources and aggregations are stored in multidimensional sources

Pic-25

PROACTIVE CACHING:-

*This feature helps keep the cube in sync with the relational data sources

*It takes the latency time, schedule time and event table to capture the changes from relational data sources to cube databases

CREATING TABLE BINDING PARTITION:-

By default a table is bound to the partition; in this case the table is the FACT TABLE. Along with this fact table binding, if you want to add more fact table bindings (when the cube has many fact tables), we go for the below process.

Navigation:

Partition------->New partition------->Click Next

Measure group: Text Fact

Available Tables:*Text-fact------->

CLICK NEXT------>SPECIFY THE QUERY TO

Restrict rows and remove where condition

Next------->Processing Location

CURRENT SERVER INSTANCE

Storage Location------->Click Next------>Name: Table Text fact-Partition----->Click Next----->Finish

Build----->deploy

WORKING WITH QUERY BINDING:-

*This mechanism we use frequently in real time.

*Generally we create partitions based on the frequency of data processing and its columns

*Assume we are processing the data into the fact table based on ‘OK' and ‘NOK' flags; then create partitions on that column

1. Creating ‘OK' partition: -

a. Delete the existing table partition

b. Partitions------>New partition click------>

Measure group: Text FACT

Available tables

Text_fact

*Specify a query to restrict rows

SELECT----------

----------------

FROM [dbo].[Text_Fact]

Where [dbo].[Text_FACT].[IKU]='OK'

Next----->

*Current Server instance------>Next

Name: Ok_FACT

Aggregation option

*Design aggregations later----->Finish

2. Like the above process, create the ‘NOK' partition with the below change:
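As a rough sketch, the two query-binding queries could look like the following. The flag column [IKU] and the table [dbo].[Text_Fact] are taken from the step above; the column list is elided in these notes, so SELECT * stands in for it (in real time, list the fact columns explicitly):

-- 'OK' partition
SELECT *
FROM  [dbo].[Text_Fact]
WHERE [dbo].[Text_Fact].[IKU] = 'OK'

-- 'NOK' partition
SELECT *
FROM  [dbo].[Text_Fact]
WHERE [dbo].[Text_Fact].[IKU] = 'NOK'

The two WHERE conditions must not overlap; otherwise the same fact rows are counted twice in the cube.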

PIC-25

Processing Partitions:-

There are two ways

a. Build Menu--->Process

b. Partitions----->Select Partition/Partitions----->RC----->Process

Processing FACT TABLE:-

If the data (or) structure in the fact table changed, to effect the same at the cube database level, we go for fact processing

Navigation:-

Partitions tab ----->Select Partition---->RC----->Process

Fact Processing options:-

a. Process default

b. Process full

c. Process Data

d. Process Incremental

e. Process Index

f. Process

DIMENSION PROCESSING:-

If the dimension table structure (or) data changes in the data sources, to effect the same in the cube database, we go for dimension processing

Navigation:-

VIEW---->SOLUTION EXPLORER ----->DIMENSIONS----->SELECT DIMENSION (eg:TIME)---->

RC---->PROCESS

PROCESSING OPTIONS: -

a. Process Default

b. Process full

c. Process data

d. Process Index

e. Process Update

f. Un process

PROCESSING OPTIONS FOR OLAP OBJECTS:-

The objects that you can process in SSAS are: database, cube, measure group, partition, dimension, mining structure, and mining model. Among these objects, only dimensions, partitions, and mining structures store data.

When you process an object, the server creates a processing plan.

Pic-26

Note:- ‘Process Add' is not available in the dimension and (fact) partition processing options

PROACTIVE CACHE PRACTICAL IMPLEMENTATION

1. Create a table binding partition

2. Go to Storage settings

Specify

*Standard setting

Automatic MOLAP

*Custom setting

options

------>Click----->Storage mode: MOLAP

*Enable proactive caching

General

Cache settings

[*]Update the cache when data changes

Silence interval: 20 seconds

Silence override interval: 20 seconds

Notification

Specify tracking table

[dbo].[text_fact]

Click OK

Build------>Deploy

Observations:-

--------->Go to browser----->take some fields and see the grand total

*Add some rows in the source "Text_Fact"; after 20 seconds the cube automatically processes and the grand total changes

Note:- No manual intervention is required

UNDERSTANDING STORAGE MODES:-

Pic-27

Pic-28

Automatic MOLAP: The default silence interval is set to 10 seconds. As a result, the server will not react if the data change notifications are fewer than 10 seconds apart. If there is not a period of silence, the server will start processing the cache in 10 minutes.

Scheduled MOLAP: Same as Automatic MOLAP except that the server will process the partition on a daily schedule.

MOLAP: The partition storage mode is standard MOLAP. Proactive caching is disabled. You need to process the partition to refresh the data.

PROACTIVE CACHING

UNDERSTANDING PROACTIVE CACHING

As noted, with MOLAP and HOLAP storage modes, SSAS caches data (MOLAP storage mode only) and aggregations (both MOLAP and HOLAP) on the server.

When you take a data “snapshot” by processing a cube, the data becomes outdated until you process the cube again. The amount of OLAP data latency that is acceptable will depend on your business requirements.

In some cases, your end users might require up to date or even real-time information.

A new feature of SSAS 2005, PROACTIVE CACHING, can help you solve data latency problems

 

TIPS FOR EXAMS

Proactive caching is especially useful when the relational database is transaction oriented and data changes at random.

When data changes are predictable, such as when you use an extract, transform, and load (ETL) process to load data, consider processing the cube explicitly. When the data source is transaction oriented and you want minimum latency, consider configuring the cube to process automatically by using proactive caching.

HOW    PROACTIVE   CACHING   WORKS

When you enable proactive caching, the server can listen for data change notifications and can update dimensions and measures dynamically in an "AUTOPILOT" mode.

 

STEADY STATE

In steady state, no changes are happening to the relational data.

Step 1:-

Client applications submit Multidimensional Expressions (MDX) queries to the cube (please check the diagram on the previous page).

Step 2:-

The cube satisfies the queries from a MOLAP cache. The server listens for a data change notification event, which could be one of the following three types.

SQL SERVER:-

This option uses the SQL Server trace events that the relational engine raises when data is changed (SQL Server 2000 and later).

CLIENT INITIATED:-

In this case, a client application notifies SSAS when it changes data by sending a NotifyTableChange XML for Analysis (XMLA) command.

SCHEDULED POLLING:-

With this option, the server periodically polls the required tables for changes.

UNSTEADY STATE:-

STEP 3: -  At some point a data change occurs in the data source as shown in the figure.

Step 4: - This change triggers a notification event to SSAS server.

Step 5: - The server starts two stopwatches. The silence interval stopwatch measures the time elapsed between two consecutive data change events. This will reduce the number of false starts for building the new cache until the database is quiet again.

For example:-

If data changes are occurring in batches, you do not want to start rebuilding the cache with each data change event. Instead, you can optimize proactive caching by defining a silence interval that allows a predefined amount of time for the batch changes to complete.

After data in the relational database is changed, the server knows that the MOLAP cache is out of date and starts building a new version of the cache.

Step 6: - The latency stopwatch specifies the maximum latency period of the MOLAP cache, the administrator can also predefine the maximum latency period.  During the latency period, queries are still answered by the old MOLAP cache.

When the latency period is exceeded, the server discards the old cache. While the new version of the MOLAP cache is being built, the server satisfies client queries from the ROLAP database.

Step 7: - When the new MOLAP cache is ready, the server activates it and redirects client queries to it. Proactive caching enters a steady state again until the next data change event takes place.

NAMED QUERY:-

  1. These queries are helpful to construct a table with a customized query. The advantages are:

a)      Restriction of rows and columns

b)      Multiple columns from multiple tables

  2. The name of the query result acts like a table name
  3. This option is available in the data source view

Navigation:-

Data source view----->RC----->New Named Query----->specify the name and the query----->Ok

Now connect this resulting table to the cube.
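A minimal sketch of such a named query, assuming illustrative column names (Loc ID, Loc Name, Actual Cost and Estimated Cost are assumptions; only the Text_Fact and Location tables are mentioned in these notes):

SELECT  f.[Loc ID],
        l.[Loc Name],            -- columns from multiple tables
        f.[Actual Cost],
        f.[Estimated Cost]
FROM    [dbo].[Text_Fact] AS f
JOIN    [dbo].[Location]  AS l
        ON l.[Loc ID] = f.[Loc ID]
WHERE   f.[Actual Cost] > 0      -- restriction of rows

The result set behaves like a table in the data source view and can be used in the cube like any other table.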

NAMED CALCULATION:-

  1. Like named queries, named calculations also improve performance (because both run at the source level)
  2. It is recommended in real time when we use a calculated value permanently
  3. Instead of doing the calculation at the cube level, doing it at the source level always gives better query performance

Ex:- Creating an Actual Cost Increment field by increasing the actual cost by 12%.

Navigations:- Data source view----->RC on the fact table----->New Named Calculation----->specify the column name and expression----->OK
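A minimal sketch of the expression typed into the named calculation, assuming the source column is called [Actual Cost] (an assumption for illustration):

-- New column, e.g. "Actual Cost Increment", on the Text_Fact table in the DSV
[Actual Cost] * 1.12    -- actual cost increased by 12%

The expression is evaluated by the relational source when the measure group is processed, so the cube only stores the result.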

IMPLEMENTING SCENARIO DIMENSION

  1. This is designed to implement the "Write Back" option.
  2. Here the data is written back to the data sources.
  3. The scenario dimension performs many operations; one among them is "WRITE BACK".

Steps:-

  1. Create a table in data source (ex: -party)
  2. Add the table in data source view
  3. Dimensions---->new dimension----->

Use an existing table.

Next

Data source view: Texttiles1_DB

Main table: PARTY

Key columns: PARTY ID, PARTY NAME

 

Dimension structure:-

Once you move the cursor to another row, the row is saved in the PARTY table.

 

Applying security:-

  1. At BIDS level, SSAS supports ‘role based' security.
  2. A user or group is assigned to a role; each role contains a set of responsibilities (object privileges: access, read, write).

Ex:- Creating two roles (Developer-SSAS and Tester-SSAS) with different objects and different privileges, and observing the result.

  1. solutions explorer----->Roles------->RC----->New Role------>

General

Role name: developer-Role-SSAS

Role description: - developers have full privileges

 

Enter the object names to select: VINAY----->Check Names----->OK

 

Note:-

In SSAS 2005, even if we select Full Control, we can still edit objects and their security. That means the tabs (data sources, cubes, ...) remain enabled for operations.

In 2008, the Full Control option disables the remaining tabs.

  1. Role----->new Role------>

General

Role name: Tester-Role-SSAS

Role Description: No privileges to tester

Add ---->MADHU---->check names------->OK

 

Data sources

  1. Deploy, GOTO CUBE BROWSER, Reconnect
  2. Click change user
  3. Roles-----> Select---->TESTER-ROLE-SSAS and observe

Various ways of deployment

There are many ways to deploy cubes or cube objects.

a)      BIDS (BUSINESS INTELLIGENCE DEVELOPMENT  STUDIO)

b)      Deployment wizard

c)       XMLA script

d)      Synchronize  database wizard

e)      Backup and restore

f)       Analysis management objects (AMO)

 

OPTION                                      Recommended Use

BIDS                                        Deploying the latest changes to your local server for testing

Deployment wizard                           Deploying to a test or production environment when you need more granular control

XMLA script                                 Scheduling a deployment task

Synchronize database wizard                 Synchronizing two cubes, such as a staging cube and a production cube

Backup and restore                          Moving a cube from one server to another

Analysis Management Objects (AMO)           Handling deployment programmatically

a)      BIDS:-

Solution explorer------>project (Text-fact) -------->RC----------->properties-----> Build

Output path:- bin\Deployment

Options------>processing option: default---->transactional deployment: False

Target

Server: <server name to be deployed>

Database: Text-fact-cube-DB

Ok

b)      Deployment wizard:-

 

Start ---->programs----->Microsoft SQLSERVER 2008------>Analysis service--->deployment wizard

---->next----->

Database file: C:\VINAYAKA\Text files\bin\Text files.asdatabase----> next----->

Server: <server name to be deployed>

Database: Text files------>next---->

Partitions O deploy partitions

Roles and members O deploy roles and retain members --->next

Now the "analysis services database" is deployed to the specified server

Note:- This wizard deploys the Analysis Services database file (as a database), as created in the output directory by the project build.

c)       Using XMLA script:-

  1. XMLA refers to "XML for Analysis".
  2. This is the frequently used mechanism.
  3. It contains cube name, server name, dimension, fact tables and their attribute names, etc. (object information). This information is useful to modify/add current/new settings.

 

Navigation:-  (creating an XML A script file)

SSMS----> Analysis services ------> cube database name (ex:-Text files-I) ----------> RC ----->script database as-----> create to------>File-----> specify file name and path to store XML A script------>save

GOTO the file & monitor the details.

D)  Deploying using the XMLA script:-

  1. GOTO the XMLA script file.
  2. Make whatever changes are required and save the file.
  3. SSMS----> Open---->File----->specify the XMLA script file path---->Open.
  4. Execute the script.
  5. Observe the cube database in the specified server according to the settings.

E) BACKUP and RESTORE:-

a) BACKUP:- SSMS----->Analysis services---->cube database---->RC Backup----->

Backup file: 9AM_New.abf

Password: VINAY

Confirm password: VINAY          -------> OK

b)   RESTORE:-  Analysis services----->Databases---->RC------>Restore----->

Backup file: Browse to 9AM_New.abf

Restore Database: VINAY- 9AM_NEW

Password: VINAY

SSAS        PERFORMANCE     TUNING

  1. OPTIMIZE CUBE AND MEASURE GROUP DESIGN: -

a) Define cascading attribute relationships, for example Day > Month > Quarter > Year, and define user hierarchies of related attributes (called natural hierarchies) within each dimension, as appropriate for your data.

b) Remove redundant relationships between attributes to assist the query execution engine in generating the appropriate query plan. Attributes need to have either a direct or an indirect relationship to the key attribute, not both.

c) Keep the cube space as small as possible by including only the measure groups that are needed.

d) Place measures that are queried together in the same measure group. A query that retrieves measures from multiple measure groups requires multiple storage engine operations.

e) Minimize the use of large parent-child hierarchies.

In parent-child hierarchies, aggregations are created only for the key attribute and the top attribute (for example, the All attribute), unless it is disabled.

f) Optimize many-to-many dimension performance, if used. When you query the data measure group by the many-to-many dimension, a run-time "join" is performed between the data measure group and the intermediate measure group, using the granularity attributes of each dimension that the measure groups have in common.

 

2. DEFINE EFFECTIVE AGGREGATIONS: -

a. Define aggregations to reduce the number of records that the storage engine needs to scan from disk to satisfy a query.

b. Avoid designing an excessive number of aggregations. Excessive aggregations reduce processing performance and may reduce query performance.

c. Enable the analysis services query log to capture user query patterns and use this query log when designing aggregations

3. USE PARTITIONS: -

a. Define partitions to enable Analysis Services to query less data to resolve a query when it cannot be resolved from the data cache or from aggregations. Also, define partitions to increase parallelism when resolving queries.

b. For optimum performance, partition data in a manner that matches common queries. A very common choice for partitions is to select an element of time such as day, month, quarter, year, or some combination of time elements (see the sketch after point d below).

c. In most cases, partitions should contain fewer than 20 million records, and each measure group should contain fewer than 2,000 total partitions. Also, avoid defining partitions containing fewer than two million records.

Too many partitions cause a slowdown in metadata operations, and too few partitions can result in missed opportunities for parallelism.

d. Define a separate ROLAP partition for real-time data and place the real-time ROLAP partition in its own measure group.
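As referenced in point b, a minimal sketch of a time-matched partition query, assuming a [Year_ID] column in the fact table (the column name is an assumption for illustration):

-- One partition per year, matching queries that slice by year
SELECT *
FROM  [dbo].[Text_Fact]
WHERE [Year_ID] = 2009

Each yearly partition gets its own copy of this query with a different year value, and only the current year's partition needs frequent processing.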

4. WRITE EFFICIENT MDX: -

a. Remove empty tuples from your result set to reduce the time spent by the query execution engine serializing the result set.

b. Avoid run-time checks in an MDX calculation that result in a slow execution path. If you use the CASE and IIF functions to perform condition checks that must be resolved many times during query resolution, you will have a slow execution path.

Rewrite these queries using the SCOPE function to quickly reduce the calculation space to which an MDX calculation refers.

c. Use Non_Empty_Behavior where possible to enable the query execution engine to use bulk evaluation mode. However, if you use Non_Empty_Behavior incorrectly, you will return incorrect results.

d. Use EXISTS rather than filtering on member properties to avoid a slow execution path. Use the NON EMPTY and EXISTS functions to enable the query execution engine to use bulk evaluation mode (see the query sketch after point h below).

e. Perform string manipulations within Analysis Services stored procedures using server-side ADOMD.NET rather than with string manipulation functions such as StrToMember and StrToSet.

f. Rather than using the LookupCube function, use multiple measure groups in the same cube wherever possible.

g. Rewrite MDX queries containing arbitrary shapes to reduce excessive sub queries where possible.

For example:-

The set { (Gender.Male, customer.USA), (Gender. Female, customer. Canada)} is an arbitrary set.

You can frequently use the Descendants function to resolve arbitrary shapes by using a smaller number of subqueries than queries that return the same result written using other functions.

h. Rewrite MDX queries that result in excessive prefetching where possible. Prefetching is a term used to describe cases where the query execution engine requests more information from the storage engine than is required to resolve the query at hand for reasons of perceived efficiency.
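As referenced in point d, a minimal query sketch using NON EMPTY and the measure-group form of EXISTS (the cube name [Textiles], the measure group name "Text Fact", and the attribute names are assumptions based on the earlier examples in these notes):

SELECT [Measures].[Actual Cost] ON COLUMNS,
       NON EMPTY
       EXISTS( [Location].[Loc Name].MEMBERS, , "Text Fact" ) ON ROWS
FROM [Textiles]

NON EMPTY removes empty rows before they are serialized, and EXISTS keeps only the locations that actually have rows in the Text Fact measure group, instead of filtering on member properties.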

5. USE THE QUERY ENGINE CACHE EFFICIENTLY

a. Ensure that the analysis services computer has sufficient memory to store query results in memory for reuse in resolving subsequent queries.

b. Define calculations in the MDX script. Calculations in the MDX script have a global scope that enables the cache related to these queries to be shared across sessions for the same set of security permissions.

c. Rewrite MDX queries containing arbitrary shapes to optimize caching.

For example

In some cases, you can rewrite queries that require non-cached disk access such that they can be resolved entirely from the cache by using a subselect in the FROM clause rather than a WHERE clause. In other cases, a WHERE clause may be a better choice.
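A minimal sketch of the two forms, assuming the cube is named [Textiles] and the year member key is &[2009] (both assumptions):

-- Filter expressed as a subselect in the FROM clause
SELECT [Measures].[Actual Cost] ON COLUMNS
FROM ( SELECT [Time].[Year].&[2009] ON COLUMNS
       FROM [Textiles] )

-- The same filter expressed as a WHERE (slicer) clause
SELECT [Measures].[Actual Cost] ON COLUMNS
FROM [Textiles]
WHERE ( [Time].[Year].&[2009] )

Which form caches better depends on the query and calculation design, as noted above.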

6. Ensure flexible aggregations are available to answer queries.

a. Note that incrementally updating a dimension using Process Update drops all flexible aggregations affected by updates and deletes and, by default, does not re-create them until the next full process.

b. Ensure that aggregations are re-created by processing affected objects, configuring lazy processing, performing Process Indexes on affected partitions, or performing full processing on affected partitions.

7. TUNE MEMORY USAGE

a. Increase the size of the paging files on the analysis services server or add additional memory to prevent out-of-memory errors when the amount of virtual memory allocated exceeds the amount of physical memory on the analysis services server.

b. Use Microsoft Windows Advanced Server or Datacenter Server with SQL Server 2005 Enterprise Edition (or SQL Server 2005 Developer Edition) when you are using SQL Server 2005 (32-bit) to enable Analysis Services to address up to 3 GB of memory.

c. Reduce the value of the Memory\LowMemoryLimit property below 75 percent when running multiple instances of Analysis Services or when running other applications on the same computer.

d. Reduce the value of the Memory\TotalMemoryLimit property below 80 percent when running multiple instances of Analysis Services or when running other applications on the same computer.

e. Keep a gap between the Memory\LowMemoryLimit property and the Memory\TotalMemoryLimit property; 20 percent is frequently used.

f. When query thrashing is detected in a multi-user environment, contact Microsoft support for assistance in modifying the memory heap type.

g. When running on non-uniform memory access (NUMA) architecture and VirtualAlloc takes a very long time to return or appears to stop responding, upgrade to SQL Server 2005 SP2 and contact Microsoft support for assistance with appropriate settings for pre-allocating NUMA memory.

8. TUNE PROCESSOR USAGE

a. To increase parallelism during querying for servers with multiple processors, consider modifying the ThreadPool\Query\MaxThreads and ThreadPool\Process\MaxThreads options to a number that depends on the number of server processors.

b. A general recommendation is to set ThreadPool\Query\MaxThreads to a value of less than or equal to two times the number of processors on the server.

For example

If you have an eight-processor server, the general guideline is to set this value to no more than 16.

c. A general recommendation is to set the ThreadPool\Process\MaxThreads option to a value less than or equal to 10 times the number of processors on the server. This property controls the number of threads used by the storage engine during querying operations as well as during processing operations.

For example:-

If you have an eight-processor server, the general guideline is to set this value to no more than 80.

 

9. SCALE-UP WHERE POSSIBLE: -

a. Use a 64-bit architecture for all large systems.

b. Add memory and processor resources and upgrade the disk I/O subsystem, to alleviate query performance bottlenecks on a single system.

c. Avoid linking dimensions or measure groups across servers and avoid remote partitions whenever possible because these solutions do not perform optimally.

10. SCALE-OUT WHEN YOU CAN NO LONGER SCALE-UP

a. If your performance bottleneck is processor utilization on a single system as a result of a multi-user query workload, you can increase query performance by using a cluster of Analysis Services servers to service query requests.

Requests can be load balanced across two Analysis Services servers, or across a larger number of Analysis Services servers to support a large number of concurrent users (this is called a server farm). Load balancing clusters generally scale linearly.

b. When using a cluster of Analysis Services servers to increase query performance, perform processing on a single processing server and then synchronize the processing server and the query servers using the XMLA Synchronize statement, copy the database directory using Robocopy or some other file copy utility, or use the high-speed copy facility of SAN storage solutions.

 

 

 
