
SAP HANA ARCHITECTURE

We have divided the HANA architecture into three parts, as shown below.
We will go through them one by one.
PART 1:
·         Understanding the SAP HANA architecture
·         Explaining IMCE and its components

PART 2:
·         Storing data – row storage
·         Storing data – column storage

PART 3:
·         Understanding the persistence layer
·         Understanding Backup and recovery

PART 3:
Understanding the persistence layer
·         SAP HANA’s persistence layer manages the logging of all transactions in order to provide standard backup and restore functions.
·         Both the row store and the column store interact with the persistence layer. It offers regular savepoints, and it also logs all database transactions since the last savepoint.
·         The persistence layer is responsible for the durability and atomicity of transactions.
·         The persistence layer manages both data and log volumes on the disk, and also provides interfaces to read and write data that is leveraged by all the storage engines. 
·         This layer is built based on the persistency layer of MaxDB, SAP’s traditional relational database. 
·         The persistence layer guarantees that the database is restored to the most recent committed state after a restart, and that transactions are either completely executed or completely rolled back.
·         To accomplish this efficiently, it uses a blend of write-ahead logs, shadow paging, and savepoints.
·         To enable scalability in terms of data volumes and the number of application requests, the SAP HANA database supports scale-up and scale-out.
·         Keeping data in the main memory brings up the question of what will happen in the case of a loss of power.
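The interplay of synchronous log writes and asynchronous savepoints described above can be sketched in a few lines of Python. This is a toy model only, not SAP HANA's actual implementation; all class and file names here are illustrative.

```python
import json
import os
import tempfile

class ToyPersistenceLayer:
    """Toy model of a persistence layer: in-memory data, a
    synchronously written redo log, and periodic savepoints."""

    def __init__(self, directory):
        self.directory = directory
        self.data = {}  # the in-memory store
        self.log_path = os.path.join(directory, "redo.log")

    def commit(self, key, value):
        # Write-ahead: the log entry is flushed to disk *before* the
        # commit returns, so a committed change survives a power loss.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.data[key] = value

    def savepoint(self):
        # Asynchronous in HANA (by default every five minutes); here we
        # simply dump all data and truncate the now-redundant log.
        with open(os.path.join(self.directory, "savepoint.json"), "w") as f:
            json.dump(self.data, f)
        open(self.log_path, "w").close()

db = ToyPersistenceLayer(tempfile.mkdtemp())
db.commit("k1", "v1")
db.savepoint()
db.commit("k2", "v2")  # survives in the redo log until the next savepoint
```

After this runs, the savepoint file holds `k1`, while the change to `k2` exists only in the redo log, which is exactly the state a restart would have to roll forward from.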
In database technology, atomicity, consistency, isolation, and durability (ACID) are a set of requirements that guarantees that the database transactions are processed reliably:
·         A transaction has to be atomic. This means the transaction should either execute completely or fail completely. If a part of it fails, the entire transaction has to fail, and the database state should be left unchanged.
·         Consistency means that the integrity of the database must be preserved by every transaction it performs.
·         Isolation ensures that all transactions are independent.
·         Durability means that a committed transaction remains committed, that is, its changes survive even a subsequent system failure.
While the first three requirements are not affected by the in-memory database concept, durability is the one requirement that cannot be met by keeping data in main memory alone.
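Atomicity in particular can be demonstrated with any ACID-compliant database. The following minimal illustration uses Python's built-in sqlite3 module (not SAP HANA itself, and the account table is invented for the example): if any step of a transaction fails, the whole transaction is rolled back and the database state is unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 0)")
conn.commit()

try:
    with conn:  # one transaction: both updates happen, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'A'")
        # Simulate a failure before the matching credit to account B
        raise RuntimeError("failure mid-transaction")
except RuntimeError:
    pass  # the partial debit of A was rolled back automatically

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'A': 100, 'B': 0} — state unchanged
```

Using the connection as a context manager commits on success and rolls back on an exception, which is exactly the "all or nothing" behavior atomicity requires.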
·         The main memory is volatile storage; its content is cleared when the power is switched off. To make data persistent, non-volatile storage (such as hard drives, SSDs, or flash devices) has to be used.
·         The storage used by a database to store data is divided into pages.
·         When data changes occur due to transactions, the changed pages are marked and written to the non-volatile storage at regular intervals.
·         In addition to this, all changes made by transactions are captured by database logging. All committed transactions generate a log entry, and these entries are written to non-volatile storage.
·         This ensures that all committed transactions are stored permanently. The following diagram illustrates this using the example of SAP HANA.
·         All the changed pages are saved in the form of savepoints, which are asynchronously written to persistent storage at regular intervals (by default, every five minutes).
·         The log is written synchronously, that is, a transaction does not return before the corresponding log entry has been written to persistent storage.


·         After a power failure, the database can be restarted like a disk-based database: database pages from the savepoints are restored, and then the database logs are applied (rolled forward) to restore the changes that were not captured in the savepoints.
·         This ensures that the database can be restored in the memory to exactly the same state as it was before the power failure.
·         The SAP in-memory database holds the bulk of its data in the memory for maximum performance.
·         It still depends on persistent storage to provide a fallback in case of failure. The log captures all changes made by database transactions (redo log).
·         Data and undo log information (parts of data) are automatically saved to the disk at regular savepoints.
·         The log is also saved to the disk continuously and synchronously after each commit of a database transaction (waiting for the end of a disk write operation).
The database can be restarted after a power failure, just like a disk-based database:
·         The system is normally restarted (lazy reloading of tables to keep the restart time short)
·         The system returns to its last consistent state (by replaying the redo log since the last savepoint)
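The two restart steps above can be sketched as a small Python function: restore the last savepoint, then roll the redo log forward. This is an illustrative toy model; the names do not correspond to real HANA modules.

```python
def restore(savepoint_pages, redo_log_entries):
    """Toy restart sequence: load the last savepoint, then replay the
    redo log to reach the last consistent committed state."""
    data = dict(savepoint_pages)        # step 1: restore savepoint pages
    for entry in redo_log_entries:      # step 2: roll the redo log forward
        data[entry["key"]] = entry["value"]
    return data

# Savepoint taken before the crash, plus log entries written after it:
savepoint = {"k1": "v1"}
redo_log = [{"key": "k2", "value": "v2"}, {"key": "k1", "value": "v1'"}]
print(restore(savepoint, redo_log))  # {'k1': "v1'", 'k2': 'v2'}
```

Note that replaying the log also overwrites savepoint data (`k1` here) with the newer committed value, so the result is the last consistent state, not merely the savepoint state.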
Understanding Backup and Recovery:
·         In the SAP HANA database, during normal operation, data is automatically saved to the disk at regular savepoints.
·         Furthermore, the log captures all data changes. After each committed database transaction, the log is saved from memory to disk.
·         When there is a power failure, the database can be restarted like any disk-based database, and it returns to its last consistent state by replaying the log since the last savepoint.
The backups are required for the following reasons:
·         To protect against disk failures.
·         To make it possible to reset the database to an earlier point in time.
Backups are carried out while the database is running and users can continue to work normally. The impact on system performance is negligible.
·         SAP HANA is an in-memory database, that is, a database that keeps its tables in main memory (RAM).
·         RAM is the fastest data storage medium available today; however, it is volatile: during a power loss, the data bits on the chip are erased or lost.
·         To avoid data loss, SAP HANA therefore combines regular savepoints with database (redo) logging, both written to persistent storage volumes.
·         With the combination of both redo logging and in-memory data savepoints, the system is fully capable of recovering from a sudden power failure.
The administration console of the SAP HANA studio provides a one-stop support environment for different activities such as system monitoring, backup and recovery, and user provisioning.
·         The entire payload data from all the server nodes of an SAP HANA database instance is backed up as soon as the data area is backed up.
·         This principle applies for both single-host and multihost environments.
·         During a log backup, the payload of the log segments is copied from the log area to service-specific log backup files.
·         Backup and recovery always applies to the entire database.
·         It is not possible to back up or recover individual database objects. While performing a backup of the SAP HANA system, all objects, such as database tables, information models (that is, views), undo logs, and metadata, are saved to a configurable persistent disk location.
·         In summary, all of the data and code stored in SAP HANA is backed up and made available at the specified path.
By default, the SAP HANA system creates a log backup every 15 minutes (900 seconds), or whenever a standard log segment becomes full.
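This interval is controlled by a configuration parameter. The snippet below shows the relevant section of global.ini as commonly documented; verify the parameter name and section against your HANA revision before changing it.

```ini
[persistence]
# Interval in seconds between automatic log backups (default: 900)
log_backup_timeout_s = 900
```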
·         For scenarios such as data center failures caused by accidents (fire, power outages), natural calamities (earthquakes), or hardware failures (such as the failure of a node), SAP HANA supports a hot-standby concept using synchronous mirroring with a redundant data center.
·         This includes redundant SAP HANA databases also.


·         In addition, the cold-standby concept uses a standby system within one SAP HANA landscape, where the failover is triggered automatically.
·         SAP HANA is an ACID-compliant database supporting atomicity, consistency, isolation, and durability of transactions.
·         In addition to recovery for Online Analytical Processing (OLAP), SAP HANA also provides transactional recovery for Online Transactional Processing (OLTP) through the administrative console in the SAP HANA studio.
The currently supported processes are given as follows:
·         Recovery to last data backup
·         Recovery to last and older (previous) data backup
·         Recovery to last state before crash
·         Point-in-time recovery
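Point-in-time recovery, the last of these options, can be pictured as replaying the log only up to a chosen timestamp. The sketch below is a toy illustration; the timestamps and entries are made up and do not reflect HANA's log format.

```python
def recover_until(savepoint, log_entries, until_ts):
    """Replay committed log entries with timestamp <= until_ts on top
    of the last data backup (toy point-in-time recovery)."""
    data = dict(savepoint)
    for ts, key, value in sorted(log_entries):
        if ts > until_ts:
            break  # stop at the requested point in time
        data[key] = value
    return data

savepoint = {"k1": "v1"}
log = [(100, "k2", "v2"), (200, "k1", "v1-new"), (300, "k3", "v3")]
print(recover_until(savepoint, log, until_ts=250))
# {'k1': 'v1-new', 'k2': 'v2'} — the change at ts=300 is not applied
```

"Recovery to last state before crash" is then simply the special case where the log is replayed to its very end.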
User provisioning is supported with role-based security, authentication, and analysis authorization using analytic privileges, which enables security for analytical objects based on a set of attribute values.
·         The administration console in SAP HANA Studio enables the version control mechanism for models of SAP HANA and SAP Data Services.
·         SAP HANA can run in a single production landscape if the initial use case is not business critical and the data load performance of the initial load is acceptable for reloading the data.
·         However, it is always recommended to align the SLT and SAP Data Services environment with the existing source system landscapes.
·         For enterprise-grade, business-critical environments, SAP HANA needs to run in a standard landscape, that is, separate SAP development, quality assurance and staging, and production environments.

·         For scale-up scalability, all algorithms and data structures are designed to work on large multi-core architectures, especially focusing on the cache-aware data structures and code fragments.
·         For scale-out scalability, the SAP HANA database is designed to run on a cluster of individual machines.
·         This allows the distribution of data and query processing across multiple nodes.
·         The scalability features of the SAP HANA database are heavily based on the proven technology of the SAP BWA product.
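Distributing data across the nodes of a cluster, as scale-out requires, can be illustrated with simple hash partitioning. This is a generic technique, not SAP HANA's actual partitioning scheme; the row keys and node count are invented for the example.

```python
import hashlib

def node_for(key, num_nodes):
    """Map a row key to one of num_nodes hosts via a stable hash, so
    data and query work can be spread evenly across the cluster."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

rows = ["order-1001", "order-1002", "order-1003", "order-1004"]
placement = {row: node_for(row, num_nodes=3) for row in rows}
print(placement)  # each key deterministically lands on one of 3 nodes
```

Because the mapping is deterministic, any node can compute where a given key lives, which is what lets query processing be routed to the node that holds the data.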
Thank you for reading and hope this information is helpful. Please do share with your friends if you feel the information is useful.


