copa_scenario.doc


IntroductionIntroduction"An in-memory database) is a database management system that primarily relies on main memory for

computer data storage".

SAP HANA (High-Performance Analytic Appliance) is an in-memory database from SAP used to store and analyze large volumes of non-aggregated transactional data in real time, with performance that makes it ideal for decision support and predictive analysis.

The In-Memory Computing Engine is a next-generation innovation that uses cache-conscious data structures and algorithms, leveraging hardware advances as well as SAP software technology innovations. It is ideal for real-time OLTP and OLAP in one appliance, i.e. an end-to-end solution from transactional processing to high-performance analytics. SAP HANA can also be used as a secondary database to accelerate analytics on existing applications.

In-memory Technology

In-memory technology moves data and information sources from remote databases into local memory so that the results of analyses and transactions are available immediately.

The elements of in-memory computing are not new. However, dramatically improved hardware economics and software technology innovations have made it possible to realize the real-time enterprise with in-memory business applications.

The cost of main memory has decreased significantly. It is now cost-effective to store all the data of a large enterprise in main memory. The SAP HANA Appliance is a combination of in-memory software and SAP-partner hardware that allows you to query multiple types of sources at speeds and in volumes never possible before. All data is kept in main memory and can be processed at incredible speed.

HANA's real-time platform combines high-volume transactions with analytics to help create solutions that take your business performance to the next level.

The HANA in-memory database can help your applications zero in on the information you need without wasting time sifting through irrelevant data. The result: instant answers to your complex queries and better decision-making across your enterprise.

With optimized loading routines, system data can be restored quickly in case of power failures. The SAP HANA Appliance can fail over to a cold standby server to guarantee high availability.

What is SAP HANA?

SAP HANA is an exciting new technology brought to you by SAP. At its core it uses an innovative in-memory technique to store your data that is particularly suited to handling very large amounts of tabular, or relational, data with unprecedented performance. Can you imagine the increase in efficiency if your application could simply read around unwanted fields and access only the information that is really required?

If this style of data storage were used, you would experience a significantly faster response from your database or application. SAP HANA allows you to read around unwanted data by organizing tables in this efficient columnar manner. In addition to the common row-oriented storage schema, a column-oriented data storage layout can be used. This means your application does not have to wait for the database to fetch data that it does not need, since all the data in a table column is stored adjacently.

But what if your database system already caches all data in RAM, in fast accessible main memory close to the CPU? Would a column-oriented memory layout still speed up access? Measurements conducted at SAP and at the Hasso Plattner Institute in Potsdam have proven that reorganizing the data in memory column-wise brings a tremendous speed increase when accessing a subset of the data in each table row. As SAP HANA caches all data in memory, hard disks are rarely used in the system; they are only needed to record changes to the database for permanent persistency. SAP HANA keeps the number of changes to your dataset as small as possible by recording every change as a delta to your original dataset. Data is not modified in place but inserted or appended to a table column. This provides several advantages beyond speed of access: because all of the old data is retained, your applications can effectively time-travel through data, providing views of the data as it has changed over time.

Contemporary database applications separate data management and applications into two distinct architectural layers, the database and the application layer. This separation forces data to travel from the database to the application before it can be analyzed or modified. Sometimes very large amounts of data have to travel from one layer to another. SAP HANA avoids this common bottleneck by locating data-intensive application logic where the data is, which is in the database itself. To enable this embedding of application logic in the database, SAP has invented an extension to standard SQL (Structured Query Language) named SQLScript. SQLScript allows data-intensive operations to be programmed in a way that lets them execute in the database layer. SQLScript allows you to extend SQL queries to contain high-level calculations, thereby extending the data processing capabilities of the database.
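To make the idea concrete, here is a minimal SQLScript sketch; the procedure name, table and columns (sales, region, amount) are illustrative assumptions, not objects from this scenario:

-- Hypothetical example: pushes an aggregation into the database layer,
-- so only the small aggregated result travels to the application.
CREATE PROCEDURE top_regions (OUT result TABLE (region VARCHAR(4), total DECIMAL(15,2)))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  -- table variable holding an intermediate result inside the database
  lt_sums = SELECT region, SUM(amount) AS total
              FROM sales
             GROUP BY region;
  result = SELECT region, total FROM :lt_sums ORDER BY total DESC;
END;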

SAP HANA

SAP HANA is a flexible, data-source-agnostic appliance that enables customers to analyze large volumes of SAP ERP data in real time, avoiding the need to materialize transformations. SAP HANA is a hardware and software combination that integrates a number of SAP components including the SAP HANA database, SAP LT (Landscape Transformation) Replication Server, SAP HANA Direct Extractor Connection (DXC) and Sybase Replication technology. SAP HANA is delivered as an optimized appliance in conjunction with leading SAP hardware partners.

SAP HANA database

The SAP HANA database is a hybrid in-memory database that combines row-based, column-based, and object-based database technology. It is optimized to exploit the parallel processing capabilities of modern multi-core CPU architectures. With this architecture, SAP applications can benefit from current hardware technologies. The SAP HANA database is the heart of SAP's in-memory technology offering, helping customers to improve their operational efficiency, agility, and flexibility.

SAP HANA Landscape

The following two figures show the SAP HANA landscape from an internal and an external view. The first figure shows the components that make up SAP HANA, including the SAP HANA database and SAP HANA studio.

Basic Concepts behind the HANA Database

Main memory is no longer a limited resource; modern servers can have 2 TB of system memory, and this allows complete databases to be held in RAM. Current processors have up to 64 cores, and 128 cores will soon be available. With the increasing number of cores, CPUs are able to process more data per time interval. This shifts the performance bottleneck from disk I/O to the data transfer between main memory and CPU cache.

In-Memory Database:

HANA fully leverages hardware innovations such as multi-core CPUs and the availability of high-capacity RAM. The basic concept is to cache the entire database in fast, accessible main memory close to the CPU for faster execution and to avoid disk I/O. Disk storage is still required for permanent persistency, since main memory is volatile. SAP HANA holds the bulk of its data in memory for maximum performance, but still uses persistent storage to provide a fallback in case of failure. Data and log are automatically saved to disk at regular savepoints; the log is also saved to disk after each COMMIT of a database transaction. Disk write operations happen asynchronously as a background task. Generally, on system start-up HANA loads the tables into memory.
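As a small illustration of that behavior (the table name t1 is hypothetical):

-- Hypothetical table; the INSERT changes data in main memory only.
INSERT INTO t1 (id, amount) VALUES (1, 100);
-- The COMMIT writes the log entry for this transaction to disk;
-- the changed data pages themselves follow later, at the next savepoint.
COMMIT;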

Massively Parallel Processing:

With the availability of multi-core CPUs, higher CPU execution speeds can be achieved. Multiple CPUs call for new parallel algorithms to be used in databases in order to fully utilize the available computing resources. HANA's column-based storage makes it easy to execute operations in parallel using multiple processor cores. In a column store, data is already vertically partitioned. This means that operations on different columns can easily be processed in parallel. If multiple columns need to be searched or aggregated, each of these operations can be assigned to a different processor core. In addition, operations on one column can be parallelized by partitioning the column into multiple sections that can be processed by different processor cores. With the SAP HANA database, queries can be executed rapidly and in parallel.
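For example, a query such as the following (using the CO-PA value fields of table CE1IDEA that appear later in this document) aggregates four different columns, and each aggregate can in principle be assigned to its own core:

-- Each SUM scans a different column of the column store; the scans are
-- independent, so the engine can run them on different processor cores,
-- and each single-column scan can itself be split by partitioning the column.
SELECT SUM("VV010"), SUM("VV070"), SUM("VV290"), SUM("VV960")
  FROM "CE1IDEA";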

Hybrid Data Store:

Common databases store tabular data row-wise, i.e. all data for a record is stored adjacent in memory. Row store tables are linked lists of memory pages. Conceptually, a database table is a two-dimensional data structure with cells organized in rows and columns. Computer memory, however, is organized as a linear structure. To store a table in linear memory, two options exist:

- A row-oriented storage stores a table as a sequence of records, each of which contains the fields of one row.
- A column-oriented storage stores all the values of a column in contiguous memory locations.

Use of the column store helps avoid scanning unnecessary columns when performing search and aggregation operations on single-column values stored in contiguous memory locations. Such an operation has high spatial locality and can be executed efficiently in the CPU cache. With row-oriented storage, the same operation would be much slower, because data of the same column is distributed across memory and the CPU is slowed down by cache misses. The column store is optimized for high-performance read operations and efficient data compression. This combination of classical and innovative data storage and access technologies allows developers to choose the best technology for their application and, where necessary, use both in parallel.

OLTP and OLAP Database:

HANA is a hybrid database with two in-memory relational engines: a read-optimized column store ideally suited for OLAP, and a write-optimized row store best for OLTP workloads. Using column stores in OLTP applications requires a balanced approach to the insertion and indexing of column data to minimize cache misses. The SAP HANA database allows the developer to specify whether a table is to be stored column-wise or row-wise. It is also possible to alter an existing table from columnar to row-based and vice versa.
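In SQL this choice looks roughly as follows (table names are illustrative; the statements follow standard SAP HANA syntax):

-- Column store: read-optimized, suited to OLAP-style scans and aggregation.
CREATE COLUMN TABLE sales_items (id INTEGER, region VARCHAR(4), amount DECIMAL(15,2));

-- Row store: write-optimized, suited to frequent single-record access.
CREATE ROW TABLE app_config (param VARCHAR(30), value VARCHAR(100));

-- An existing table can be converted from one layout to the other.
ALTER TABLE app_config ALTER TYPE COLUMN;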

Basic Concepts

Impact of Modern Hardware on Database System Architecture

Historically, database systems were designed to perform well on computer systems with limited RAM, with the effect that slow disk I/O was the main bottleneck in data throughput. Consequently, the architecture of those systems was designed with a focus on optimizing disk access, e.g. by minimizing the number of disk blocks (or pages) to be read into main memory when processing a query.

Computer architecture has changed in recent years. Now multi-core CPUs (multiple CPUs on one chip or in one package) are standard, with fast communication between processor cores enabling parallel processing. Main memory is no longer a limited resource; modern servers can have 2 TB of system memory, and this allows complete databases to be held in RAM. Current server processors have up to 64 cores, and 128 cores will soon be available. With the increasing number of cores, CPUs are able to process more data per time interval. This shifts the performance bottleneck from disk I/O to the data transfer between CPU cache and main memory.

Traditional databases for online transaction processing (OLTP) do not use current hardware efficiently. It was shown in 1999 by Ailamaki et al. that when databases have all data loaded into main memory, the CPU spends half of its execution time in stalls, i.e. waiting for data to be loaded from main memory into the CPU cache.

So, what are the ideal characteristics of a database system running on modern hardware?

In-memory database. All relevant data is available in main memory. This characteristic avoids the performance penalty of disk I/O. With all data in memory, techniques to reduce disk I/O, such as disk-based indexes, are no longer needed. Disk storage is still required for permanent persistency, for example in the event of a power failure.

Cache-aware memory organization. The design must minimize the number of cache misses and avoid CPU stalls because of memory access. A general mechanism to achieve this is to maximize the spatial locality of data, i.e. data that is accessed consecutively should be stored contiguously in memory. For example, search operations in tabular data can be accelerated by organizing data in columns instead of rows.

Support for parallel execution. Higher CPU execution speeds are nowadays achieved by adding more cores to a CPU package. Earlier improvements resulted from applying higher packing densities on the chip and optimizing electronic current paths. The speed advancements available using these techniques have, for the most part, been exhausted. Multiple CPUs call for new parallel algorithms to be used in databases in order to fully utilize the available computing resources.

Columnar and Row-Based Data Storage

As mentioned above, columnar storage organization in certain situations leads to fewer cache misses and therefore fewer CPU stalls. This is particularly useful when the CPU needs to scan through a full column, for example when a query is executed that cannot be satisfied by index searches, or when aggregate functions such as the sum or average are to be calculated on the column.

Indexed rows are the classical means of speeding up row-based table access. Indexes can also be used with column stores, but index trees have low spatial locality and therefore increase cache misses significantly. In addition, indexes have to be reorganized whenever data is inserted into a table. Database programmers therefore have to understand the advantages and disadvantages of both storage techniques in order to find a suitable balance.

Conceptually, a database table is a two-dimensional data structure with cells organized in rows and columns. Computer memory, however, is organized as a linear structure. To store a table in linear memory, two options exist, as shown in figure 2. A row-oriented storage stores a table as a sequence of records, each of which contains the fields of one row. Conversely, in a column store the entries of a column are stored in contiguous memory locations.
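As a small worked example of the two layouts (table and values are hypothetical):

CREATE COLUMN TABLE demo_sales (id INTEGER, region VARCHAR(4), amount INTEGER);
INSERT INTO demo_sales VALUES (1, 'EMEA', 10);
INSERT INTO demo_sales VALUES (2, 'APJ',  20);
INSERT INTO demo_sales VALUES (3, 'EMEA', 30);

-- The column layout stores:  1,2,3 | EMEA,APJ,EMEA | 10,20,30
-- so this aggregate scans one contiguous block of memory:
SELECT SUM(amount) FROM demo_sales;
-- A row layout would store: (1,EMEA,10)(2,APJ,20)(3,EMEA,30),
-- forcing the same scan to step over the id and region fields.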

Figure 2: Row and column-based storage

The concept of columnar data storage has been used for quite some time. Historically it was mainly used for analytics and data warehousing, where aggregate functions play an important role. Using column stores in OLTP applications requires a balanced approach to the insertion and indexing of column data to minimize cache misses. The SAP HANA database allows the developer to specify whether a table is to be stored column-wise or row-wise. It is also possible to alter an existing table from columnar to row-based and vice versa.

Column-based tables have advantages in the following circumstances:
- Calculations are typically executed on single columns or a few columns only.
- The table is searched based on the values of a few columns.
- The table has a large number of columns.
- High compression rates can be achieved because the majority of the columns contain only a few distinct values (compared to the number of rows).

Row-based tables have advantages in the following circumstances:
- The application needs to process only a single record at a time (many selects and/or updates of single records).
- The application typically needs to access a complete record (or row).
- The columns contain mainly distinct values, so the compression rate would be low.
- Neither aggregations nor fast searching are required.
- The table has a small number of rows (e.g. configuration tables).

To enable fast on-the-fly aggregations and ad-hoc reporting, and to benefit from compression mechanisms, it is recommended that transaction data is stored in a column-based table. The SAP HANA database allows row-based tables to be joined with column-based tables. However, it is more efficient to join tables that are located in the same store. For example, master data that is frequently joined with transaction data should also be stored in column-based tables.
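A sketch of that recommendation in SQL (hypothetical table names, loosely modeled on the customer master and CO-PA tables used later in this document):

-- Master data and transaction data both in the column store,
-- so the join does not cross store boundaries.
CREATE COLUMN TABLE customer_master (kunnr VARCHAR(10) PRIMARY KEY, land1 VARCHAR(3));
CREATE COLUMN TABLE copa_lines (kndnr VARCHAR(10), vv010 DECIMAL(15,2));

SELECT m.land1, SUM(c.vv010) AS gross_revenue
  FROM copa_lines AS c
  JOIN customer_master AS m ON m.kunnr = c.kndnr
 GROUP BY m.land1;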

Controlling & Profitability Analysis in SAP HANA

Profitability Analysis (CO-PA) enables you to evaluate market segments, which can be classified according to products, customers, orders or any combination of these, or strategic business units, such as sales organizations or business areas, with respect to your company's profit or contribution margin.

The aim of the system is to provide your sales, marketing, product management and corporate planning departments with information to support internal accounting and decision-making.

Two forms of Profitability Analysis are supported: costing-based and account-based.

Costing-based Profitability Analysis is the form of profitability analysis that groups costs and revenues according to value fields and costing-based valuation approaches, both of which you can define yourself. It guarantees you access at all times to a complete, short-term profitability report.

Account-based Profitability Analysis is a form of profitability analysis organized in accounts and using an account-based valuation approach. The distinguishing characteristic of this form is its use of cost and revenue elements. It provides you with a profitability report that is permanently reconciled with financial accounting.

You can also use both of these types of CO-PA simultaneously.

Scenario CO-PA

You are at a customer site and have been asked to build an Information Model in HANA for the purpose of displaying CO-PA data. You have been asked to produce three reports. This section takes you through the process of creating the models to be used later in the reporting exercise. Note that the details of the CO-PA tables are covered in the presentation. You will only build a model for report 3.

Create the Attribute Views

1. Navigate in the Information Modeler to your own package.
NOTE: Please make sure you select your own package.

2. Create a new Attribute View.
Select the Attribute Views node of your package and use the right-mouse-button menu New > Attribute View to create a new Attribute View.
Enter the name of the Attribute View, LOCATION_XX, and the description, Customer Location XX. Select the Standard Attribute View type. Press Next.

3. From the EIM360 schema select the table KNA1 to build the Attribute View.
Enter KNA1, then press the filter button. Select KNA1 and press the Add button to move the table into the Selected area. Repeat the above steps to add T005U. Click Finish to open the Attribute View Editor.
Tip: In case the "New Attribute View" dialog has disappeared, you can press the "Add table" button and continue the table selection.

4. The Attribute View Editor is populated with the selected tables. This is where you will define the relationships of the attributes.

5. Now select the fields for use in the attribute view.
Select the key field KUNNR and right-click on the field. Choose Add as Key Attribute.
Result: KUNNR:KNA1.KUNNR appears in the Output frame.

6. Display the properties of KUNNR. Select KUNNR in the Output frame.
On the Property tab for field KUNNR, select Description Mapping and choose the field EIM360.KNA1.NAME1 from the drop-down list.

7. Join field LAND1 from KNA1 to LAND1 of T005U as a text table join.
Click on the join and change the join type in the Property view: Text Table (Join Type), 1..1 (Cardinality). Select SPRAS as the Language Column.

8. Join field REGIO from KNA1 to field BLAND of T005U.
As before, set the join type to Text Table with cardinality 1..1, and select SPRAS as the Language Column. Save.

9. Add a second text table to the attribute view.
Press the "Add table" button and continue the table selection. Enter table T005T and select the EIM360 table from the list. Click OK.
Result: Table T005T is added to the Data Foundation tab.

10. Define a text join between field LAND1 from KNA1 and field LAND1 from T005T.
Select the join and change the join type in the Property view. In the join Property frame, select SPRAS as the Language Column.
HINT: Don't forget to set the join type to Text Table. (An SQL sketch of these text joins follows at the end of this section.)

11. Add KNA1.LAND1 as an attribute: right-click on LAND1 and choose Add as Attribute. Select LAND1 in the Output frame.

12. On the Property tab for field LAND1, select Description Mapping and choose the field EIM360.T005T.LANDX.

13. Add KNA1.REGIO as an attribute: right-click on KNA1.REGIO and choose Add as Attribute.

14. Select KNA1.REGIO in the Output frame. On the Property tab for field REGIO, select Description Mapping and choose the field EIM360.T005U.BEZEI. Save.

15. Add ORT01 as an attribute.

This is a description field for the city, so in this case there is no need to map to a text table. Save.

16. On the Attribute View LOCATION_XX, right-click and select Activate.
Result: The Deployment Log will show success or give an error message.

17. Choose the Attribute View LOCATION_XX. Right-click and choose Data Preview.

18. Next create a new Attribute View for product.
Name it PRODUCT_XX with the description PRODUCT_XX. Join table EIM360.MARA and table EIM360.MAKT. Define the join as a text join on the Properties tab, and define the language column as SPRAS on the Property tab.
HINT: Define the description mapping for the key attribute using MAKT.MAKTX. Save, activate and preview.
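For orientation, the text joins modeled in steps 7-10 correspond roughly to the following SQL. This is a sketch only: the modeler resolves the language at runtime, whereas the SPRAS filter is hard-coded to 'E' here.

-- Approximate SQL equivalent of the LOCATION_XX attribute view:
-- customer master KNA1 joined to text tables T005T (country names)
-- and T005U (region names), restricted to one language via SPRAS.
SELECT k.kunnr, k.land1, t.landx, k.regio, u.bezei, k.ort01
  FROM kna1  AS k
  LEFT JOIN t005t AS t ON t.land1 = k.land1 AND t.spras = 'E'
  LEFT JOIN t005u AS u ON u.land1 = k.land1 AND u.bland = k.regio AND u.spras = 'E';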

Create the Analytical View for Actuals

1. Close all open views prior to creating a new analytical view.
Create an analytical view CEA1_XX with the description Contribution Margin for Actuals.

2. Find the table CE1IDEA and click Add.
Remember, you have to find the table by clicking on the arrow next to the input field.
Important: Do not click Finish or you will have to start over. Click Next.

3. Navigate to your package, expand the folder and select the attribute views PRODUCT_XX and LOCATION_XX. Click Finish.

4. Choose the Data Foundation tab.
Result: You will see table CE1IDEA in your Data Foundation tab.

5. Navigate to the Logical View tab to see the two attribute views. Note that the Data Foundation is empty. It will be filled once you select your fields (attributes and measures).

6. Navigate to the Data Foundation tab. Select the two fields KNDNR and ARTNR from CE1IDEA and add them as attributes (right-click).

7. Navigate to the Logical View tab. Notice that the Data Foundation now has two fields.
Join DataFoundation.ARTNR to PRODUCT_XX.MATNR.
Join DataFoundation.KNDNR to LOCATION_XX.KUNNR.

8. Navigate to the Data Foundation tab, where we will define filters.

Apply filters on the fields PALEDGER and VRGAR (record type).
Select PALEDGER > right-click > Apply Filter.
o Choose Operator Equal from the Apply Filter dialog box.
o Specify the value 01.
For VRGAR, choose Operator Equal and specify the value F. Save. (A SQL sketch of these filters appears at the end of this section.)

9. Notice there is now a filter icon next to PALEDGER and VRGAR.

10. Next select the private attributes to be included in the analytical view.
Choose from CE1IDEA the following fields in addition to KNDNR and ARTNR:
o PERIO
o VKORG
o PLIKZ
HINT: To find the fields quickly, enter the name of the field in the Find Column field.

11. Next select the measures to be included in the analytical view.
Choose from CE1IDEA the following fields (right-click on the field > Add as Measure):
o VV010
o VV070
o VV290
o VV960

12. Rename the measures in the Name field of the Property tab for each measure.
Use copy and paste from the table below to ensure that you use the exact names.

Change From      Change To
VV010            GrossRevenue
VV070            SalesDeduction
VV290            ProductionVariance
VV960            OtherExpenses

Save.

13. Next define the calculated measures to be included in the analytical view. Right-click on Calculated Measures and choose New.

14. Create the first Calculated Measure, NetRevenue.
- Select the Decimal data type with length 15,0.
- Double-click on the desired measure for it to appear in the expression editor. Either type in the minus sign or double-click on the operator. Continue to create all the other needed Calculated Measures (see the SQL sketch at the end of this section).

Name          Description              Data Type    Length    Formula
NetRevenue    Net Revenue              DECIMAL      15        GrossRevenue - SalesDeduction
CM1           Contribution Margin 1    DECIMAL      15        NetRevenue - ProductionVariance
CM2           Contribution Margin 2    DECIMAL      15        CM1 - OtherExpenses

Save and activate your Analytical View.

15. Display the data in your Analytical View. Right-click on the Analytical View and select Data Preview.
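Taken together, the filters (step 8) and the measures and formulas (steps 11-14) give CEA1_XX roughly the semantics of this SQL. It is a sketch for orientation only, since the modeler generates a column view rather than this literal query:

-- Costing-based actuals (PALEDGER = '01', record type VRGAR = 'F'),
-- aggregated by the attributes chosen above.
SELECT "KNDNR", "ARTNR", "PERIO", "VKORG",
       SUM("VV010")                                              AS "GrossRevenue",
       SUM("VV070")                                              AS "SalesDeduction",
       SUM("VV010") - SUM("VV070")                               AS "NetRevenue",
       SUM("VV010") - SUM("VV070") - SUM("VV290")                AS "CM1",
       SUM("VV010") - SUM("VV070") - SUM("VV290") - SUM("VV960") AS "CM2"
  FROM "CE1IDEA"
 WHERE "PALEDGER" = '01' AND "VRGAR" = 'F'
 GROUP BY "KNDNR", "ARTNR", "PERIO", "VKORG";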

Create the Analytical View for Plan

16. Close all open views prior to creating a new analytical view. Create an analytical view CEP1_XX with the description Contribution Margin for Planning.

17. Find the table CE2IDEA and click Add.
Remember, you can find the table by clicking on the arrow next to the input field. Click Next.
Important: Do not click Finish or you will have to start over.

18. Navigate to your package, expand it and select the attribute views PRODUCT_XX and LOCATION_XX. Click Finish.

19. Choose the Data Foundation tab. You will see CE2IDEA in the Data Foundation tab.

20. Choose the Logical View tab to see the two attribute views created previously. Note that the Data Foundation is empty.

21. Navigate to the Data Foundation tab again.

Add KNDNR and ARTNR as attributes (right-click).

22. Navigate to the Logical View tab again. Notice the Data Foundation now has two fields.
Join fields ARTNR and MATNR. Join fields KNDNR and KUNNR.

23. Navigate to the Data Foundation tab, where we will define further selections.
Apply filters on the fields PALEDGER and VRGAR (record type).
Select PALEDGER > right-click > Apply Filter.
o Choose Operator Equal from the Apply Filter dialog box.
o Specify the value 01.
For VRGAR, choose Operator Equal and specify the value F. Save.

24. Notice there is now a filter icon next to PALEDGER and VRGAR.

25. Next select the attributes to be included in the analytical view.
Choose from CE2IDEA the following fields, in addition to KNDNR and ARTNR:
o PERBL
o VKORG

26. Next select the measures to be included in the analytical view.
Choose from CE2IDEA the following fields:
o VV010001
o VV070001
o VV290001
o VV960001

27. Rename the measures in the Name field of the Property tab for each measure.
Use copy and paste to ensure the exact names.

Change From      Change To
VV010001         GrossRevenue
VV070001         SalesDeduction
VV290001         ProductionVariance
VV960001         OtherExpenses

Save.

28. Next define the calculated measures to be included in the analytical view. Right-click on Calculated Measures and choose New.

29. Create the first Calculated Measure, NetRevenue.
- Select the Decimal data type with length 15,0.
- Double-click on the desired measure for it to appear in the expression editor. Either type in the minus sign or double-click on the operator. Continue to create all the other needed Calculated Measures.

Name          Description              Data Type    Length    Formula
NetRevenue    Net Revenue              DECIMAL      15        GrossRevenue - SalesDeduction
CM1           Contribution Margin 1    DECIMAL      15        NetRevenue - ProductionVariance
CM2           Contribution Margin 2    DECIMAL      15        CM1 - OtherExpenses

30. You should now have the Calculated Measures shown above.

31. Save and activate your Analytical View. Preview the Analytical View as well to see if you get data.

Create the Calculation View (Graphical)

1. Right-click and create a new Calculation View. Name your calculation view CAV_GRA_XX and specify Graphical as the type of the Calculation View. Click Next. Since we will be reading from existing Analytical Views, you do not have to select any tables; click Next. Select the two previously created Analytical Views and click Finish.

2. From the Tools Palette select two Projection graphical nodes, one for each Analytical View. This is where you will set the actual-versus-planned data indicator. Click on each Projection node and rename them to Projection_A and Projection_P. Add a Union graphical node. With the mouse pointer, hover over each graphical node and draw a connection line between the nodes.

3. Add a new Calculated Column called KPLIKZ. In the diagram select the Projection_A node. Within the Output, right-click on Calculated Columns > New. Enter KPLIKZ for the name of the field. Select INTEGER and enter 0 in the Expression Editor as the planned-indicator value. Click Add.

4. Add all the fields of the Actuals Analytical View to the Output of the Projection_A node. Select the Projection_A node; within the details view select all the fields > Right-click > Add to Output.

5. Proceed to work on the Projection_P node. As before, add the field KPLIKZ, but this time set the expression value to 1 as an indication of planned data.

6. Proceed to select and work with the Union node. Within the details of the Union node, select all the fields from the Projection_A node > Right-click > Add to Target.

7. Scroll down and select all the fields on the Projection_P node. Since we want to combine the two Analytical Views, we need to map the fields from both views to each other. Right-click > Map to Target. You can also select each individual field on the left and drag and drop it over the corresponding field on the right-hand side. Save.

8. Click on the Output node in the main diagram. Within the Details view pane, select each field > Right-click > add the field as an attribute or as a measure accordingly. Save.

Create a Script-Based Calculation View

9. Right-click on Calculation View > New > Calculation View. Enter CAV_SCR_XX for the name of the Calculation View. Select SQL Script and click Finish.

10. On the left side of the diagram, double-click on the Script node. Before entering code into the middle section, click on the structure icon within the Outputs section on the right side of the screen.
11. Enter all the output fields accordingly, including the corresponding data type and length, then enter the following HANA SQLScript code (variable references are written here in the standard :variable form):

/* Actual */
SQLA_VIEW = CE_OLAP_VIEW ("XXXXXX/XXX_XX",
    ["MATNR", "KUNNR", "REGIO", "LAND1", "ORT01", "PERIO", "VKORG",
     "GrossRevenue", "SalesDeduction", "ProductionVariance", "OtherExpenses",
     "NETREVENUE", "CM1", "CM2"]);

SQL_A = CE_PROJECTION (:SQLA_VIEW,
    ["MATNR", "KUNNR", "REGIO", "LAND1" AS "LANDX", "ORT01", "PERIO", "VKORG",
     CE_CALC('0', VARCHAR(4)) AS "KPLIKZ",
     "GrossRevenue" AS "GROSSREV", "SalesDeduction" AS "SALESDEC",
     "ProductionVariance" AS "PRODVAR", "OtherExpenses" AS "OTHEREXP",
     "NETREVENUE" AS "NETREV", "CM1", "CM2"]);

/* Planned */
SQLP_VIEW = CE_OLAP_VIEW ("XXXXXX/XXX_XX",
    ["MATNR", "KUNNR", "REGIO", "LAND1", "ORT01", "PERBL", "VKORG",
     "GrossRevenue", "SalesDeduction", "ProductionVariance", "OtherExpenses",
     "NETREVENUE", "CM1", "CM2"]);

SQL_P = CE_PROJECTION (:SQLP_VIEW,
    ["MATNR", "KUNNR", "REGIO", "LAND1" AS "LANDX", "ORT01", "PERBL" AS "PERIO", "VKORG",
     CE_CALC('1', VARCHAR(4)) AS "KPLIKZ",
     "GrossRevenue" AS "GROSSREV", "SalesDeduction" AS "SALESDEC",
     "ProductionVariance" AS "PRODVAR", "OtherExpenses" AS "OTHEREXP",
     "NETREVENUE" AS "NETREV", "CM1", "CM2"]);

/* Union */
var_out = CE_UNION_ALL (:SQL_A, :SQL_P);

12. Click on the Output node in the Scenario panel on the left-hand side. Within the Script View details, select all the attributes > Right-click > Add as Attribute to move the fields into the outputs on the right-hand side of the screen. Do the same for the measures.
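Once activated, the calculation view can be queried like any other column view. A hedged sketch (the _SYS_BIC schema is the usual location for activated views; replace the package path placeholder with your own):

-- Compare actual (KPLIKZ = '0') and planned (KPLIKZ = '1') net revenue per period.
SELECT "PERIO", "KPLIKZ", SUM("NETREV") AS "NetRevenue"
  FROM "_SYS_BIC"."XXXXXX/CAV_SCR_XX"
 GROUP BY "PERIO", "KPLIKZ"
 ORDER BY "PERIO", "KPLIKZ";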

What Next

We provide the following SAP HANA courses, advanced and below (online training also available):

1. SAP BW on HANA
2. SAP Rapid Deploy Solutions
3. SAP HANA Administration
4. SAP HANA Certification Oriented
5. SAP HANA End-to-End Project
6. SAP BIBO 4
7. SAP BODS 4.0
8. SAP BW 7.3

