SAP HANA Performance Optimization Techniques

SAP HANA is the fastest-growing product in SAP’s 43-year-old history. It is an in-memory database engine that can analyze large data sets in real time. Companies leverage SAP HANA to streamline and automate data management. This blog highlights optimization techniques in SAP HANA and covers the following:

  • SAP HANA Table-level Optimization
  • Optimization of SAP HANA Stored Procedures / Scripted Calculation Views / Table Functions
  • Leveraging SAP HANA Using Linked Server

SAP HANA Table-level Optimization

You can improve the performance of any SAP HANA system by ensuring the following points:

  • Use the appropriate table type for the workload. If a table is small and most queries read almost all of its columns, use a row-store table. In all other cases, use a column-store table.
  • Avoid indexes on non-unique columns. Every index adds memory and maintenance overhead, so create as few indexes as possible and keep the number of columns in each index small.
  • Partition large tables (range, hash, or round-robin) so that only the relevant partitions are brought into working memory.
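As an illustrative sketch of the first and third points (schema, table, and column names below are hypothetical), a small lookup table can live in the row store while a large transactional table is created as a hash-partitioned column table:

```sql
-- Small table whose columns are read together in most queries: row store
CREATE ROW TABLE "MYSCHEMA"."T_CURRENCY_LOOKUP" (
    "CURRENCY"    NVARCHAR(5) PRIMARY KEY,
    "DESCRIPTION" NVARCHAR(50)
);

-- Large analytical table: column store, hash-partitioned on its key
-- so that queries touch only the relevant partitions
CREATE COLUMN TABLE "MYSCHEMA"."T_SALES_ITEMS" (
    "ORDER_ID"   NVARCHAR(10),
    "ITEM_NO"    INTEGER,
    "AMOUNT"     DECIMAL(15,2),
    "POSTING_DT" DATE
)
PARTITION BY HASH ("ORDER_ID") PARTITIONS 4;
```

Range partitioning on a date column such as "POSTING_DT" would be the better choice if most queries filter by time period.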

Optimization of SAP HANA Stored Procedures / Scripted Calculation Views / Table Functions

SAP HANA optimization is necessary to avoid termination of a task or process when it breaches the memory threshold set for that process. It also improves the utilization of resources such as memory and CPU time.

Based on our experience, the following are the best practices for tuning the stored procedures:

  • Use input parameters to filter the required data at the source level.
  • Replace local temporary tables with table variables. Unload the tables as soon as they are not required for further processing when using large tables.
  • Reduce overall memory consumption by partitioning large tables, which brings less data into working memory. Choose the partitioning type and the partitioning columns based on how the table is used in the majority of scenarios.
  • Instead of writing complex queries, it is recommended to split a complex query into multiple simple queries for better optimization.
  • For delta processing on tables that don’t have a timestamp, create a table that stores the key and a timestamp (of when the record arrived in the HANA system), and populate this table using triggers. During processing, extract the records for a given time period from the main table by joining it with this delta table.
  • If a process requires a large amount of data to be processed quickly (and you have exhausted all other options), then materialize the slow-changing data and store the partial results in a separate table. Schedule this materialization process to run at appropriate intervals to refresh its data. Once you have the data in a materialized table, then you can use this table and decrease the number of tables used in the main stored procedure.
  • Use query pruning via input parameters: when a user specifies the data or tables required at that moment through input parameters, you can prune the logic that is not needed to fetch the requested data.
  • Use table functions instead of scripted calculation views.
  • In graphical calculation views that process large volumes of data, use query pruning to minimize the data loaded based on the user’s selection.
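To illustrate the first and second points above, here is a minimal SQLScript table function (all names are hypothetical) that replaces a scripted calculation view, pushes an input-parameter filter down to the source table, and uses a table variable instead of a local temporary table:

```sql
-- Hypothetical table function replacing a scripted calculation view.
-- The filter on :IP_REGION is applied at the source, so only the
-- requested region is ever read into working memory.
CREATE FUNCTION "MYSCHEMA"."TF_SALES_BY_REGION" (
    IN IP_REGION NVARCHAR(10)
)
RETURNS TABLE (
    "ORDER_ID" NVARCHAR(10),
    "AMOUNT"   DECIMAL(15,2)
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN
    -- Table variable instead of a local temporary table
    lt_sales = SELECT "ORDER_ID", "AMOUNT"
                 FROM "MYSCHEMA"."T_SALES_DOC"
                WHERE "MGMT_REGION" = :IP_REGION;  -- filter at the source

    RETURN SELECT * FROM :lt_sales;
END;
```

The same function can then back a graphical calculation view, which consumes it like any other data source.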

    SAP HANA Optimization Steps

  • In a calculation view, minimize the amount of data pulled into working memory by using input parameters and applying filters at the source projection in the data flow. If the calculation view is a union of multiple calculation views, constant mapping in the union node improves performance through query pruning: data is pulled only from the desired subview (see the picture below).

    Query Pruning with Constant Mapping
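When querying such a calculation view, the input parameter is passed with the PLACEHOLDER syntax; with constant mapping in place, only the subview matching the supplied constant is evaluated (the view, parameter, and value below are hypothetical):

```sql
-- Only the subview mapped to the constant 'ACTUALS' is executed;
-- the other branches of the union are pruned away
SELECT "MGMT_REGION", SUM("AMOUNT")
  FROM "_SYS_BIC"."prd.stg2/CV_SALES_UNION"
       (PLACEHOLDER."$$IP_SOURCE$$" => 'ACTUALS')
 GROUP BY "MGMT_REGION";
```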

While the above-mentioned techniques can help with SAP HANA optimization, here are a few things you should avoid in the process:

  • Using Smart Data Access (SDA) – SDA queries can consume a high level of memory
  • Using cursors
  • Data conversions – converting columns to the desired data type in every query is expensive
  • Using calculated columns in joins
  • Aggregation nodes
  • Applying an IS NULL check before filtering the data – filter the data set first, because IS NULL checks are expensive on large data sets
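One way to make the last point explicit in a query (table and column names are hypothetical) is to reduce the data set in an inner query before the IS NULL check runs:

```sql
-- Costly on large tables: IS NULL evaluated against the full data set
SELECT *
  FROM "MYSCHEMA"."T_SALES_DOC"
 WHERE "SHIP_TO" IS NULL
   AND "POSTING_DT" >= '2020-01-01';

-- Preferred: apply the selective filter first, then the IS NULL check
SELECT *
  FROM (SELECT *
          FROM "MYSCHEMA"."T_SALES_DOC"
         WHERE "POSTING_DT" >= '2020-01-01')
 WHERE "SHIP_TO" IS NULL;
```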

Leveraging SAP HANA using a Linked Server

For a large number of stored procedures that already exist in a different environment, such as SQL Server or Oracle, instead of migrating the stored procedures to SAP HANA, leverage the power of HANA by creating a linked server connection between the existing system and SAP HANA. This reduces the overall development effort and cost.

Benefits of using a linked server:

  • Reduced cost and effort required to migrate logic to HANA
  • Doesn’t require migration of non-SAP tables to HANA
  • Leverage the power of SAP HANA by pushing most of the logic that involves SAP ERP Central Component (ECC) tables to HANA
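On the SQL Server side, a linked server to HANA can be set up, for example, over the HANA ODBC driver via the MSDASQL provider (the server name, DSN, and credentials below are placeholders; the exact provider depends on your environment):

```sql
-- Create a linked server named ALEX_HANA over an ODBC system DSN
-- that points at the SAP HANA instance
EXEC sp_addlinkedserver
     @server     = N'ALEX_HANA',
     @srvproduct = N'HANA',
     @provider   = N'MSDASQL',
     @datasrc    = N'HANA_DSN';

-- Map a login for the connection (credentials are placeholders)
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'ALEX_HANA',
     @useself     = N'FALSE',
     @rmtuser     = N'HANA_USER',
     @rmtpassword = N'********';
```

Once the linked server exists, HANA objects can be queried through OPENQUERY, which pushes the inner statement down to HANA for execution.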

Here is an example of OpenQuery to fetch data from HANA:

SELECT *
FROM OPENQUERY(ALEX_HANA,
    'SELECT DISTINCT "MGMT_REGION", "MYCOMPANY", "CUSTOMER_NAME", "SHIP_TO", "KPI_REGION"
     FROM "_SYS_BIC"."prd.stg2/CV_INSTALLED"
     WHERE "MYCOMPANY" <> ''''
     ORDER BY "MGMT_REGION", "MYCOMPANY", "SHIP_TO"');


Passing dynamic values using OpenQuery:

DECLARE @TableHANA VARCHAR(256)
       ,@TableSDA VARCHAR(256)
       ,@PrimaryKey VARCHAR(MAX)
       ,@COLUMNS NVARCHAR(MAX)
       ,@COL_LEN INT;

-- Build a bracketed column list from the view's metadata,
-- excluding the key column
SET @COLUMNS = ',[';

SELECT @COLUMNS = @COLUMNS + ISNULL((NAME + '],['), '')
  FROM SYS.columns
 WHERE object_id IN (SELECT OBJECT_ID
                       FROM SYS.VIEWS
                      WHERE NAME = 'X_BSD_25')  --<====== CHANGE TABLE NAME HERE
   AND NAME <> 'iKEY_ID';
PRINT @COLUMNS;

-- Trim the trailing ',[' left over from the concatenation above
SET @COL_LEN = LEN(@COLUMNS);
SET @COLUMNS = LEFT(@COLUMNS, @COL_LEN - 2);
PRINT @COLUMNS;

-- Pull the HANA table into a local temporary table via OpenQuery
IF OBJECT_ID(N'tempdb..#TMPHANATABLE', N'U') IS NOT NULL
    DROP TABLE #TMPHANATABLE;

SELECT * INTO #TMPHANATABLE
  FROM OPENQUERY(ALEX_HANA, 'SELECT * FROM "ALEX_CUSTOM"."ZT_X_BSD_25"');

SET @TableHANA = '#TMPHANATABLE';
SET @TableSDA = '[dbo].[X_BSD_25]';
SET @PrimaryKey = ' ,[iKEY_ID] AS PrimaryKey ';
--Paste field names and create a single composite key if more than one primary key field exists in the field names below
PRINT @PrimaryKey;
PRINT @COLUMNS;

-- Compare the HANA copy with the local table: keys that appear in both
-- sources but whose column values differ are returned with a count > 1
EXEC('SELECT PRIMARYKEY, COUNT(*) FROM (SELECT MIN(TableName) AS TableName, PrimaryKey' + @COLUMNS +
     ' FROM (SELECT ''' + @TableHANA + ''' AS TableName' + @PrimaryKey + @COLUMNS + ' FROM ' + @TableHANA + ' A ' +
     'UNION ALL SELECT ''' + @TableSDA + ''' AS TableName' + @PrimaryKey + @COLUMNS + ' FROM ' + @TableSDA + ' B ' +
     ') N
      GROUP BY PrimaryKey' + @COLUMNS +
     ' HAVING COUNT(*) = 1
      ) XYZ GROUP BY PrimaryKey HAVING COUNT(*) > 1');


We hope you found this first-hand account of SAP HANA optimization techniques useful. You can use these techniques to reduce memory and CPU utilization and get more work done without increasing the size of your system.
