Thursday, November 4, 2010

checklists are handy sometimes

As a DBA, one of the common tasks is setting up new databases for a new application. When the application is built from scratch, you will be involved in the full life cycle of the project. Do you have a handy checklist to validate the configuration of the new databases? What if you don't have one?

Sometimes it is scary, because a wrong configuration could result in a huge business loss and thereby put your job at risk. An example scenario would be not checking on your database backup strategy. Let's say you are in a hurry and did not update trackmod or set up your database for online backups. You are in trouble now. So it is always good to have a complete checklist of items to be applied to a brand new application. Below is a task list that I compiled; it could vary depending on your environment. You may print it for your reference when you work on building a new application.

The first and most important category is the kernel settings. shmmax and shmall are two important parameters that influence memory allocation at the instance and database level. If they are not set properly, you might encounter the infamous "Shared memory segments cannot be allocated" error. The following links come in handy for setting kernel parameters:


Modifying kernel parameters (Linux)
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.qb.server.doc/doc/t0008238.html

Kernel parameter requirements (Linux)
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.qb.server.doc/doc/c0057140.html

OS user limit requirements (Linux and UNIX)
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.qb.server.doc/doc/r0052441.html

maxfilop - Maximum database files open per application configuration parameter
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.config.doc/doc/r0000280.html
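As a sketch, the recommended values can be applied with sysctl; the numbers below are placeholders for an 8 GB box, so derive yours from the links above and your server's RAM:

```shell
# Sketch only: persist kernel shared memory settings (Linux).
# The values are illustrative placeholders, not recommendations.
cat >> /etc/sysctl.conf <<'EOF'
kernel.shmmax = 8589934592
kernel.shmall = 1887436
EOF
sysctl -p    # load the new values
ipcs -l      # verify the limits now in effect
```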


Backup and recovery settings are vital. If you forget to set up the database for archival logging, or if you ignore trackmod, you will run into issues for sure. Also decide upon the backup strategy well in advance. Do not forget to take an offline backup once the database is set up for archival logging. Testing the full cycle of taking backups and restoring them to cover several failure scenarios is an essential step for critical applications. After all, a DBA's primary responsibility is to secure the critical data.
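A minimal sketch of that sequence; the database name and paths are placeholders:

```shell
# Sketch: enable archival logging and incremental-backup tracking,
# then take the mandatory offline backup. Names/paths are examples.
db2 "update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archlogs"
db2 "update db cfg for MYDB using TRACKMOD ON"
db2 "backup db MYDB to /db2/backups"    # offline backup required after the change
db2 "backup db MYDB online to /db2/backups include logs"    # later online backups
```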


Initial communication settings like svcename and DB2COMM are simple but essential for application connections. Sometimes, although you have svcename and DB2COMM set, you might still notice communication issues. Look for db2tcpcm and ipccm in the 'db2pd -edus' output. If you don't see them there, do a db2_kill and try again.
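A sketch of that check; the service name is a placeholder:

```shell
# Sketch: set the TCP/IP listener configuration and verify the listener EDUs.
db2set DB2COMM=TCPIP
db2 "update dbm cfg using SVCENAME db2c_db2inst1"   # or a port number, e.g. 50000
db2stop; db2start                                   # restart so the listener starts
db2pd -edus | grep -E 'db2tcpcm|ipccm'              # listeners should appear here
```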


LOCKTIMEOUT change from -1: leaving LOCKTIMEOUT at -1 might result in non-terminating lock waits.

STMM configuration: STMM has to be turned on or off depending on your workloads and needs.

Automatic maintenance: automatic backups and runstats have to be configured.

Memory settings: do you want your instance_memory to be AUTOMATIC or set to a manual value? Things like these have to be decided upfront.

diagpath change: diag logs might grow huge with time, and one should not ignore the storage requirements for the diag log and its path.
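A sketch of those changes as db2 commands; the values shown are examples, not recommendations:

```shell
# Sketch: example settings for the items above; tune each for your workload.
db2 "update db cfg for MYDB using LOCKTIMEOUT 30"       # avoid endless lock waits
db2 "update db cfg for MYDB using SELF_TUNING_MEM ON"   # STMM on (or OFF)
db2 "update db cfg for MYDB using AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_RUNSTATS ON"
db2 "update dbm cfg using INSTANCE_MEMORY AUTOMATIC"    # or a fixed value
db2 "update dbm cfg using DIAGPATH /db2/diag"           # diag logs on ample storage
```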


Capacity planning is crucial in the initial stages. Do you have adequate memory, CPU, and storage resources? Are your applications scalable?


Other housekeeping tasks like scheduling backups, runstats, and reorgs are important. My list here is just indicative and not comprehensive enough to cover all scenarios. My intention was to write about the necessity of maintaining checklists as a DBA. I know we all hate documentation, but believe me, it saves our job sometimes.


Tuesday, February 16, 2010

Shared memory issue: shared memory segments cannot be allocated


SQL1084C Shared memory segments cannot be allocated

Have you seen the above message anytime? It is annoying when you see it. Basically, the first thing you can try is changing instance_memory and database_memory.

The important kernel parameters to consider here are shmmax and shmall. shmmax is the maximum size of a single shared memory segment, and shmall is the maximum total allocatable shared memory (the sum of all shared memory segments should be less than or equal to this). The recommended value for shmmax is to set it equal to the RAM, and for shmall it is 90% of physical memory.

ipcs -l is the command to check the values of these kernel parameters


------ Shared Memory Limits --------
max number of segments = 4096 // SHMMNI
max seg size (kbytes) = 32768 // SHMMAX
max total shared memory (kbytes) = 8388608 // SHMALL
min seg size (bytes) = 1


Sometimes you need to increase the value of shmmax to accommodate a bigger segment, or increase the shmall value to allow more overall shared memory.
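As a sketch, here is how those recommendations translate into numbers, assuming (purely for illustration) a box with 8 GB of RAM; note that on Linux shmall is usually expressed in 4 KB pages, while the ipcs -l output above prints kbytes:

```shell
# Sketch: compute shmmax/shmall from RAM per the rule of thumb above.
# mem_kb is hard-coded to 8 GB here purely for illustration.
mem_kb=8388608                      # total RAM in KB (8 GB)
shmmax=$((mem_kb * 1024))           # shmmax in bytes: equal to RAM
shmall=$((mem_kb * 90 / 100 / 4))   # shmall in 4 KB pages: ~90% of RAM
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```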

How do we map the shared memory on the box to the shared memory parameters of the database and instance?

We use db2pd -memsets and db2pd -mempools to look at the shared memory segments allocated, and we try to map them to the original values on the box. My next blog post will reveal the mapping mechanism and the usage of the db2pd -memsets and db2pd -mempools commands.

Thursday, October 22, 2009

DB2 pureScale and Oracle RAC

IBM recently announced a new technology called pureScale. It is basically a technology that excels at horizontal scalability.

So what is horizontal scalability? How is it different from vertical scalability?

Horizontal scalability is the ability to add capacity by adding nodes to the cluster, whereas vertical scalability is the ability to increase capacity by adding extra resources to the existing entity/server.

pureScale is aimed at achieving 3 important goals:
1) Application transparency: no coding changes are required when you add extra nodes.

2) Unlimited capacity: this is achieved by adding as many nodes as needed. But there are limitations on the platforms to begin with.

3) Data availability: the system aims at zero downtime and remains available if one or many nodes fail.

Is pureScale a replacement for HADR?

No. pureScale is not a replacement, and it has only one shared database copy. So, if HADR can be applied to a pureScale system, that is the optimal availability scenario. IBM thinks they can do this in the coming days.


Oracle RAC vs. DB2 pureScale: which is better?

With Oracle RAC, a lot of application changes may have to be made when new nodes are added. But with pureScale, the application is completely transparent to changes in the nodes. pureScale also has a centralized resource management system that manages locks and other resources.

Tuesday, October 13, 2009

Basic db2 federation setup

DB2 federation can be used to retrieve information from non-DB2 sources like Oracle and SQL Server. It can also be used with DB2 data sources. Setting up federation can be confusing at times. Below is the basic procedure for setting up a federated system:

1) Enable federation: federation can be enabled by running the following command:

update dbm cfg using federated yes immediate;

2) Create a wrapper: below is the command you would issue for a DB2 data source. Wrappers differ for each RDBMS. Please refer to the IBM documentation for more information on wrappers.

create wrapper DRDA;

3) Catalog the data source information: the data source should be cataloged properly, and the federated server uses the access method appropriate for the data source type.

4) Create a server definition. Refer to the example below for a DB2/UDB data source:

db2 "create server SAMPLEHOST
     type DB2/UDB
     version 9.5 wrapper drda
     authorization 'user1'
     password '*****'
     options(node 'SAMPLENODE', dbname 'SAMPLTST')"

5) Create a user mapping:

db2 "CREATE USER MAPPING FOR federuser server SAMPLEHOST options(remote_authid 'user1',REMOTE_PASSWORD '*****')"

6) Create a nickname:

CREATE NICKNAME EMP FOR SAMPLEHOST.SAMPLTST.EMP;

If you follow all the above steps, the basic federation setup is done. Nicknames can be created on tables or views that reside in the data source. Once federation is set up, nicknames can be used to refer to the actual data source objects. If some part of a query cannot be processed by the data source, it is not pushed down to it; instead, the data source uses close alternate functionality, or the data set is sent to the federated server for additional processing.
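Once the nickname exists, applications can query it like a local table; a quick sketch using the EMP nickname from the example above:

```shell
# Sketch: the nickname EMP behaves like a local table to applications;
# the federated server fetches the rows from the remote data source.
db2 "select count(*) from EMP"
db2 "select * from EMP fetch first 5 rows only"
```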

Monday, October 5, 2009

HADR performance - part 1

HADR stands for High Availability Disaster Recovery. This solution is used for disaster recovery on DB2 UDB databases. HADR should be configured properly in order to get optimal performance:

1) HADR synchronization mode: HADR can be run in 3 different modes: SYNC, NEARSYNC, and ASYNC. SYNC mode gives the best protection to data. In this mode the primary has to wait until the changes are committed and written on the standby; the primary waits for the acknowledgement from the standby server. In NEARSYNC mode, the standby sends the acknowledgement as soon as the logs are in the memory of the standby server. And in ASYNC mode, the primary does not wait for any kind of acknowledgement from the standby. The proper synchronization mode has to be chosen for optimal performance.

2) DB2_HADR_BUF_SIZE: this registry variable controls the size of the receive buffer. The receive buffer is the area of memory where the logs are received before they are replayed. You can use db2pd -db dbname -hadr on the standby to monitor the usage of the receive buffer. If you see it reaching 100% during the workload, you need to increase the value of DB2_HADR_BUF_SIZE.

3) DB2_HADR_SOSNDBUF and DB2_HADR_SORCVBUF: these are the socket send buffer size and socket receive buffer size, respectively. If these parameters are too small, the full network bandwidth cannot be utilized. Generally, increasing them to a bigger value would not impact performance negatively.


4) LOGFILSIZ: the size of the log file plays an important role in performance. Generally this size should be a few hundred MB.
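As a sketch, the knobs above map to these commands; the database name and values are illustrative placeholders, not recommendations:

```shell
# Sketch: HADR tuning knobs discussed above; sizes are examples only.
db2 "update db cfg for MYDB using HADR_SYNCMODE NEARSYNC"
db2set DB2_HADR_BUF_SIZE=4096          # receive buffer, in log pages
db2set DB2_HADR_SOSNDBUF=1048576       # socket send buffer, bytes
db2set DB2_HADR_SORCVBUF=1048576       # socket receive buffer, bytes
db2 "update db cfg for MYDB using LOGFILSIZ 65536"   # 65536 4 KB pages = 256 MB
db2pd -db MYDB -hadr                   # monitor receive-buffer usage on the standby
```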

Wednesday, September 23, 2009

Alter table not logged initially

Here is a little code that describes the usage of "alter table ... activate not logged initially":


db2 "connect to SAMPLE"

# +c turns autocommit off: the NOT LOGGED INITIALLY state lasts only for
# the current unit of work, so nothing may commit before the insert.
DB2CMD1="alter table SAMPLE.employee activate not logged initially"
DB2CMD2="INSERT INTO SAMPLE.employee values(,,,,,,,,,,)"   # placeholder: supply your row values
DB2CMD3="commit"

db2 +c -tv "${DB2CMD1}"; db2 +c "${DB2CMD2}"; db2 "${DB2CMD3}"

Monday, June 1, 2009

Checking the instance peaks

If you want to check the peak memory usage of an instance, use the following:


db2 "select * from table (sysproc.admin_get_dbp_mem_usage(-1)) as t" | more

DBPARTITIONNUM MAX_PARTITION_MEM    CURRENT_PARTITION_MEM PEAK_PARTITION_MEM
-------------- -------------------- --------------------- --------------------
             0           1597186048             447676416            450101248


DBPARTITIONNUM: the database partition number from which memory usage statistics are retrieved.
MAX_PARTITION_MEM: the maximum amount of instance memory (in bytes) allowed to be consumed in the database partition.
CURRENT_PARTITION_MEM: the amount of instance memory (in bytes) currently consumed in the database partition.
PEAK_PARTITION_MEM: the peak or high-watermark consumption of instance memory (in bytes) in the database partition.