By thoroughly understanding the core performance issues of a database, my optimization techniques maximize the speed and efficiency of Oracle and MySQL instances and their queries.
With a deep understanding of capacity management challenges, I estimate space usage and predict object growth with high accuracy, which improves management's visibility into resource capacity planning and supports correct business decisions.
I define guidelines that help organizations and database administrators devise a backup strategy suitable for their environment, focusing beyond a mere “valid copy of backup” while devising that strategy.
HA & DR Management
Expecting the unexpected is the key to my HA & DR implementations, which ensure a high level of availability for your data and help the business run smoothly without worrying about the possibility of data unavailability.
SENIOR DBA ARCHITECT
Performance Tuning | Database Architect | Core DBA
My core expertise covers Performance Tuning, Capacity Planning & Backup Strategy on Oracle and MySQL databases.
My experience also extends to High Availability & Disaster Recovery management, PL/SQL application development, data migrations and conversions, and database migration, including Time Series & NoSQL databases.
With a deeper understanding of databases, I have on many occasions been able to quickly convert database-related customer escalations into appreciation. I have worked extensively on statistics & histograms, SQL plan stability, SQL trace files, profilers, database instance optimization, query optimization, database modelling, and managing tablespaces & storage. I also successfully orchestrated the migration of a multi-terabyte database into production.
I am currently associated with Nagarro Software Pvt Ltd.
Nagarro Software Pvt Ltd, Gurgaon
Polaris Software Lab, Gurgaon
Vayam Technology, NOIDA
Espire Infolabs Pvt Ltd, Gurgaon
Krishna Maruti Ltd, Gurgaon
The rapid growth of sensor-based IoT devices, social media, financial data such as stock market activity, and many other information-streaming platforms created the opportunity to design a whole new kind of database that can capture streaming information while highlighting the importance of time within it. This is because even traditional RDBMSs were not able to efficiently handle complex business Read more about Time Series Database : Evolvement[…]
Since its inception in 2006, Amazon AWS has definitely come a long way. Amazon's engineers have done remarkable work that has not only completely changed the horizon of cloud computing but also made it a boon for any business to adopt. Although there are many other vendors available in the cloud market and they Read more about Cloud database war: Advantage shifting to Red?[…]
A storage engine is one of the key components of any database. It is, in fact, a software module used by the database management system to perform all storage-related operations, e.g. creating, reading and updating information. The term storage here means both disk storage and memory storage. Choosing the right storage engine is Read more about WiredTiger: A game changer for MongoDB[…]
There are times when you need to drop an existing database for one reason or another. Dropping a database is not a tough job at all, provided you are very sure which database you should drop. Problem Statement: How to drop a database. Status or mode of database in which it can be dropped Step Read more about Drop a database[…]
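As a hedged sketch of the mechanics the post walks through: Oracle requires the database to be mounted in exclusive restricted mode before it can be dropped (run as SYSDBA, and verify the target instance first):

```sql
-- Sketch only: DROP DATABASE irreversibly removes data files, online logs,
-- control files and the SPFILE. Confirm the instance first, e.g.:
--   SELECT name FROM v$database;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE RESTRICT;
DROP DATABASE;
```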
Problem Statement: Restore an entire database using Data Pump. Restore table(s). Restore tablespace(s). Restore schema(s). Restore using Transportable Tablespaces (TTS). Restore from multiple small dump files. Restore in parallel mode. Approach: There is a single-shot solution to all the above problem statements, and it is IMPDP in Data Pump. It is one of various Read more about Data Pump: impdp[…]
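A minimal sketch of how those scenarios map onto impdp parameters; the directory object, dump file names and schema below are hypothetical placeholders:

```
# imp.par -- hypothetical impdp parameter file; run as: impdp system parfile=imp.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=exp_%U.dmp         # %U matches a set of multiple small dump files
LOGFILE=imp.log
SCHEMAS=HR                  # or TABLES=..., TABLESPACES=..., FULL=Y
PARALLEL=4                  # parallel-mode restore
TABLE_EXISTS_ACTION=REPLACE
```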
In the real production world, “prediction” of data growth is an important aspect of a DBA's life, because it allows the business not only to foresee its real position in terms of existing hardware but also to plan the future expenses to be spent on hardware & storage. To generate a data growth report, Oracle provides Read more about Trend of data growth in Oracle Database[…]
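One hedged way to sketch such a growth report from AWR history (this assumes a Diagnostics Pack license, and view/column names as found in recent Oracle releases):

```sql
-- Space used per AWR snapshot, summed over all tracked segments (sketch).
SELECT s.begin_interval_time,
       ROUND(SUM(g.space_used_total) / 1024 / 1024) AS used_mb
FROM   dba_hist_seg_stat g
JOIN   dba_hist_snapshot s
       ON s.snap_id = g.snap_id AND s.dbid = g.dbid
GROUP  BY s.begin_interval_time
ORDER  BY s.begin_interval_time;
```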
Problem Statement: Backup an entire database using Data Pump. Backup table(s). Backup tablespace(s). Backup schema(s). Backup using Transportable Tablespaces (TTS). Generate multiple small dump files. Backup in parallel mode. Approach: There is a single-shot solution to all the above problem statements, and it is Data Pump. It is one of various backup tools provided Read more about Data Pump: expdp & impdp[…]
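The export side can be sketched the same way; again, the directory object, schema and sizes below are hypothetical placeholders:

```
# exp.par -- hypothetical expdp parameter file; run as: expdp system parfile=exp.par
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=exp_%U.dmp   # %U generates multiple dump files
LOGFILE=exp.log
SCHEMAS=HR            # or TABLES=..., TABLESPACES=..., FULL=Y
PARALLEL=4            # parallel-mode export
FILESIZE=5G           # keep each dump file small
```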
Among the various techniques for backing up your database, Oracle provides Data Pump as one tool that they are constantly improving, making it sharper release by release. Since its first launch with the 10g version, it has improved a lot, not only in terms of new features but also in terms of performance as Read more about Data Pump: a tool to backup and restore database[…]
MongoDB is a document-oriented open source database developed in C++. It first took shape in 2007 when, in order to overcome the shortfalls of existing databases, the development team at the advertising company “DoubleClick” decided to go further rather than keep struggling with the database. The team of this advertising company was Read more about MongoDB Enterprise Edition Installation – Ubuntu[…]
MongoDB is a document-oriented open source database developed in C++. It first took shape in 2007 when, in order to overcome the shortfalls of existing databases, the development team at the advertising company “DoubleClick” decided to go further rather than keep struggling with the database. The team of this advertising company was Read more about MongoDB – Uninstallation[…]
MongoDB is a document-oriented open source database developed in C++. It first took shape in 2007 when, in order to overcome the shortfalls of existing databases, the development team at the advertising company “DoubleClick” decided to go further rather than keep struggling with the database. The team of this advertising company was Read more about MongoDB Installation – Ubuntu[…]
Like any other type of database, a NoSQL database also provides a mechanism to store and retrieve data, modeled in a way that differs from a relational database. In short, a NoSQL database does not store data in a relational, or tabular, format. That is why it is referred Read more about Mongo DB – An Introduction[…]
Problem statement: How to move data files from one location to another on the same storage. How to move data files from one storage to another. How to rename data files so that data file names are standardized. Environment / Scenario: You have a database where you have to move your data files from old, slower storage Read more about How to re-organize your data files of a tablespace[…]
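On 12c and later the move can even be done online; a hedged sketch, with hypothetical file paths:

```sql
-- Online data file move (Oracle 12c+); source and target paths are examples.
-- On older versions the tablespace/data file must be taken offline and
-- renamed instead.
ALTER DATABASE MOVE DATAFILE '/old_disk/users01.dbf'
  TO '/new_disk/users01.dbf';
```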
Problem statement: How to migrate huge data from one DB to another. Multi-terabyte data loaded on one database should be copied to another database. Environment: You have a multi-terabyte database. Your database is growing daily, based on data feeds. The number of indexes on these tables is very high, and thus the size of indexes Read more about How to copy Multi terabyte data to another Database[…]
Problem Statement: Move a DB (with Oracle binaries) to new storage. Create a new DEV/UAT from production. Approach: While creating a new UAT or DEV from production and bringing this Oracle installation to the same patch-set level as production, there is more than one approach you can follow. For example, you can either install from Read more about Move your DB with Oracle Binaries[…]
Problem Statement: Load millions of rows from flat files (CSV) into the database. Load one table from another huge table. Speed up insert statements. Approach: For creating records in the database with INSERT there are two methods: one is conventional and the other is direct path. If we look at the performance aspects of both approaches, the latter Read more about Speed-up your Inserts[…]
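The table-to-table case can be sketched with the APPEND hint, which requests a direct-path load (table names below are hypothetical):

```sql
-- Direct-path insert: rows are written above the high-water mark,
-- bypassing the buffer cache and conventional free-space search.
INSERT /*+ APPEND */ INTO sales_history
SELECT * FROM sales_staging;
COMMIT;  -- direct-path loaded rows become visible only after commit
```

For the flat-file case, SQL*Loader offers an analogous direct-path mode.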
With the discussion in my last blog about how Hadoop manages fault tolerance within its cluster while processing data, it is now time to discuss the algorithm MapReduce uses to process that data. It is the Name Node (NN) where a user submits a request to process data along with the data files. As soon as the NN receives the data Read more about MapReduce Unwinding. . . . . .Algorithm[…]
Before we see the intermediate data produced by the mapper, it would be quite interesting to look at the fault-tolerance aspects of Hadoop with respect to MapReduce processing. Once the Name Node (NN) receives the data files to be processed, it splits them for assignment to Data Nodes (DN). This assignment would be Read more about MapReduce Unwinding. . . . . . Fault Tolerance[…]
The philosophy behind how MapReduce works is straightforward and can be summarized in six steps. Whatever data we provide as input to Hadoop is first split into a number of smaller pieces. Typically, the size of each split is limited to 64 MB. If a file of 1 TB arrives for processing on a data node, Read more about MapReduce Unwinding. . . . . Philosophy[…]
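The split arithmetic above is easy to check; a small Python sketch (assuming binary units) counts the 64 MB splits carved out of a 1 TB file:

```python
# Number of 64 MB input splits Hadoop would carve out of a 1 TB file
SPLIT_MB = 64
FILE_MB = 1024 * 1024  # 1 TB expressed in MB, using binary units
num_splits = FILE_MB // SPLIT_MB
print(num_splits)  # 16384
```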
MapReduce is a programming paradigm which provides an interface for developers to map end-user requirements (any type of analysis on data) to code. This framework is one of the core components of Hadoop. The way it provides fault tolerance and massive scalability across hundreds or thousands of servers in a cluster for processing of Read more about MapReduce : Internals[…]
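The paradigm can be illustrated with a toy, single-process word count in Python; this is only a sketch of the map/shuffle/reduce flow, not Hadoop's actual API:

```python
# Toy, single-process illustration of the MapReduce paradigm: word count.
# Real Hadoop distributes the same three phases across a cluster.
from collections import defaultdict

def map_phase(records):
    # Mapper: emit a (word, 1) pair for every word in every input record
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle/sort: group all intermediate values by key
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: aggregate (here, sum) the values for each key
    return {word: sum(counts) for word, counts in grouped.items()}

data = ["map reduce map", "reduce shuffle map"]
print(reduce_phase(shuffle(map_phase(data))))
# {'map': 3, 'reduce': 2, 'shuffle': 1}
```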
Inspired by the Google File System, which Google developed in C++ around 2003 to enhance its search engine, the Hadoop Distributed File System (HDFS), a Java-based file system, became a core component of Hadoop. With its fault-tolerant and self-healing features, HDFS enables Hadoop to harness the true capability of distributed processing techniques by turning Read more about HDFS Architecture Explained[…]
Because of the limitations of currently available enterprise data warehousing tools, organizations were not able to consolidate their data in one place while maintaining fast data processing. Traditional ETL tools may take hours, days and sometimes even weeks. The performance of these tools is constrained by two hardware limitations. Vertical hardware scalability: hardware can be scaled Read more about MAGIC OF HADOOP[…]
At the outset of the twenty-first century, around 1999-2000, owing to the increasing popularity of XML and Java, the internet was evolving faster than ever. As the world wide web grew at a dizzying pace, though the search engine technologies of the time were working fine, a better open source search engine was the need of the hour to cater to the future Read more about Journey of Hadoop[…]
Innovations in technology have made resources cheaper than before. This enables organizations to store more data at lower cost, thus increasing the size of their data. Gradually data has grown from Megabytes (MB) to Petabytes (1e+9 MB). This huge increase in data requires a different kind of processing and ways of Read more about Big Data: An Introduction[…]