
DBA general questions crib-sheet

List some of the tools used to monitor the health of production databases.

 

A wide range of tools is currently available for a DBA to use:

  • custom-tailored shell and SQL scripts (a sketch follows the list),
  • monitoring tools such as Zabbix, HP SiteScope, and Oracle OEM,
  • Statspack and AWR reports (shipped with Oracle RDBMS).
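As a minimal sketch of the custom-script approach, assuming an Oracle database and a monitoring user with SELECT privileges on the V$ and DBA_* views:

    -- sessions currently waiting, grouped by wait event
    SELECT event, COUNT(*) AS sessions_waiting
      FROM v$session
     WHERE wait_class <> 'Idle'
     GROUP BY event
     ORDER BY sessions_waiting DESC;

    -- tablespaces and their remaining free space
    SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
      FROM dba_free_space
     GROUP BY tablespace_name;

A script like this, run from cron and wired to an alerting channel, is often the first monitoring layer before a full tool such as Zabbix or OEM is in place.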

  

Where would you look during load-test runs to increase the capacity (at the database level) of an application?

 

When a DBA is preparing for a load-testing event, they have to make sure to:

  • increase the log buffer and the size of the redo-log files
  • increase the number of sessions, processes, and cursors in the database, and tune the memory buffer parameters (see the parameter sketch below)
  • keep statistics in the database current
  • make sure there is adequate room to grow for the affected tablespaces and datafiles
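For example, in Oracle the session and process limits from the list above might be raised as follows (the values are illustrative only, not recommendations; PROCESSES and SESSIONS are static parameters, so an instance restart is required):

    -- static parameters: take effect after an instance restart
    ALTER SYSTEM SET processes = 2000 SCOPE = SPFILE;
    ALTER SYSTEM SET sessions  = 3000 SCOPE = SPFILE;

    -- dynamic parameter: takes effect immediately and persists
    ALTER SYSTEM SET open_cursors = 1000 SCOPE = BOTH;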

 

Every load-run or stress-testing activity should be accompanied by real-time performance measurements, including but not limited to the following (a sample database-layer check is sketched after the list):

  • OS and system layer:
    • CPU/memory utilization
    • swap usage
    • OS parameters (number of open file descriptors, huge-pages tuning)
    • I/O subsystem bandwidth and wait events
  • Database layer:
    • wait events in the database
    • buffer cache usage
    • redo log usage
    • buffer writes
    • datafile read access versus in-memory and index scans
  • Concurrency and application layer:
    • locks and deadlocks in the database
    • number of transactions per second
    • memory cache hit ratio
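A minimal sketch of a real-time check on the database and concurrency layers, assuming Oracle 12c or later:

    -- top non-idle wait events accumulated since instance startup
    SELECT event, total_waits, time_waited_micro
      FROM v$system_event
     WHERE wait_class <> 'Idle'
     ORDER BY time_waited_micro DESC
     FETCH FIRST 10 ROWS ONLY;

    -- sessions currently blocked by another session (lock contention)
    SELECT sid, blocking_session, event, seconds_in_wait
      FROM v$session
     WHERE blocking_session IS NOT NULL;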

 

Based on the above measurements and real-time analytics, the DBA can decide on a course of action for further analysis and performance tuning.

  

List some techniques to identify slow or inefficient queries.

Compare and optimize execution plans for a particular query.

In Oracle Database, a DBA might choose SQL*Plus, SQLT, or the OEM plugin to generate the desired execution plan for a given SQL_ID. EXPLAIN and EXPLAIN ANALYZE can be used to show execution plans in PostgreSQL.
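For instance, in PostgreSQL (the table and column names here are hypothetical):

    -- estimated plan only; the query is not executed
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- actually runs the query and reports real row counts and timings per plan node
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;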

Pay extra attention to:

  • full table scans and direct database file access
  • stale statistics on the corresponding tables and indexes
  • index validity (stale statistics and unusable indexes can both be checked with the queries sketched below)
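A quick way to check the last two points in Oracle (a sketch; 'APP' is a hypothetical schema name):

    -- tables whose optimizer statistics Oracle considers stale
    SELECT owner, table_name, last_analyzed
      FROM dba_tab_statistics
     WHERE owner = 'APP'
       AND stale_stats = 'YES';

    -- indexes left in an unusable state (e.g. after a partition operation)
    SELECT owner, index_name, status
      FROM dba_indexes
     WHERE status = 'UNUSABLE';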

Optimize SQL query design, following the usual guidelines.

See also the performance tips from the PostgreSQL project.

 

List some of the tradeoffs when selecting a particular type of database system (traditional RDBMS, NoSQL, key-value stores, or any other).

 

RDBMS is the choice for traditional OLTP and OLAP systems:

  • + pros: transactions, joins, vertical scalability, and other features, depending on the DBMS
  • + big tables, rich stored-procedure and SQL language support
  • - cons: complexity of RDBMS data modeling
  • - limited linear scalability

 

NoSQL is the way to go for distributed key-value stores:

  • + sharding and linear scaling
  • + faster concurrent database reads
  • + schemas are defined by developers at runtime
  • - no stored procedures; 'eventually consistent' transactions
  • - map/reduce mechanisms in place of joins
  • - poor write-concurrency handling

 

List some techniques for scaling out database reads. What are the pros and cons of each technique?

 

At the infrastructure level:

  • migrating database files to a faster storage subsystem (e.g., SSD-based RAID arrays)
  • putting redo-log files and database files on separate storage volumes
  • maxing out vertical scaling: adding more CPU and memory, and tuning OS parameters (huge pages)
  • horizontal scaling: adding more nodes to the cluster (if supported)
  • load-balancing clients between the nodes

At the database level:

  • running ANALYZE and gathering statistics after big loads or massive updates (see the sketch below)
  • index partitioning
  • table partitioning
  • 'pinning' database objects in memory
  • linear scaling (sharding)
  • when applicable, offloading read-only and reporting queries to a standby database
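As an example of the first point, refreshing optimizer statistics in Oracle after a bulk load (a sketch; the schema and table names are hypothetical):

    BEGIN
      -- refresh optimizer statistics for one heavily updated table
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'ORDERS');
    END;
    /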

 

What are some techniques for scaling out database writes? What are the pros/cons of each technique?

Available techniques, a high-level overview (a partitioning sketch follows the list):

  1. shrinking large tables, archiving, and partitioning
  2. tuning the redo log buffer and buffer cache memory parameters
  3. maxing out vertical scaling with OS parameter tuning
  4. migrating to a faster storage subsystem
  5. organizing database nodes into a cluster (where applicable)
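As an illustration of the partitioning technique from point 1, a declarative range-partitioned table in PostgreSQL 10 or later (all names are hypothetical):

    -- parent table partitioned by range on the insert timestamp
    CREATE TABLE events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    text
    ) PARTITION BY RANGE (created_at);

    -- writes for each month land in a smaller physical table,
    -- and old months can be archived or dropped cheaply
    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');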

 

What are some techniques for performing maintenance on database servers without interrupting service?

Let's sort out the kinds of maintenance:

  1. backups: use a full plus incremental hot-backup strategy for heavily loaded OLTP systems
  2. cloning: use a database hot backup in conjunction with storage-level snapshotting and deduplication
  3. patching: where applicable, use node clustering and session-level load balancing for hot failover away from the nodes being patched/upgraded

 

How to perform time-consuming schema changes on a database without downtime?

The answer depends on the chosen database product.

However, the goal is always to avoid locking large tables for the duration of schema-modification operations

such as the ALTER TABLE command.

The high-level algorithm is (see the sketch after the list):

  • recreate the table with the modified structure
  • copy data from the existing table without disconnecting users
  • when the copy is done, switch sessions over to the new schema
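A minimal sketch of the copy-and-switch idea in generic SQL (the 'orders' table and its columns are hypothetical; a real online-schema-change tool would also capture changes made during the copy, e.g. via triggers):

    -- 1. new table with the modified structure
    CREATE TABLE orders_new (
        id          bigint PRIMARY KEY,
        customer_id bigint NOT NULL,
        status      varchar(16) DEFAULT 'new'  -- the newly added column
    );

    -- 2. copy existing rows while the application keeps using 'orders'
    INSERT INTO orders_new (id, customer_id)
    SELECT id, customer_id FROM orders;

    -- 3. once the copy has caught up, swap the tables with quick renames
    ALTER TABLE orders     RENAME TO orders_old;
    ALTER TABLE orders_new RENAME TO orders;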

Tools and instruments vary, depending on the product:

  • Oracle: using OEM and Schema Baselines for schema altering
  • PostgreSQL: third-party tools, e.g. Slony for database replication and slonik for schema patching
  • MySQL: third-party tools, e.g. pt-online-schema-change or Facebook's Online Schema Change

 

List some techniques for ensuring high availability of database services for an application.

Various HA clustering products are available for master-slave and master-master replication; examples include, but are not limited to:

  • Oracle RAC / Clusterware
  • MariaDB Galera Cluster
  • MongoDB: configuring replica sets and HA clustering
  • PostgreSQL: third-party products

 

Provide an example of a proper backup protocol for a production database system, assuming that we need the ability to perform a point-in-time recovery to within 4 hours of a crash.

I. No incremental backups implemented

Full hot-backup scenario (mapped onto Oracle commands in the sketch below):

  1. make sure the database is running in archive log mode (WAL archiving, in PostgreSQL terminology)
  2. put the database into hot backup mode
  3. copy the database files over to the backup location
  4. take the database out of hot backup mode
  5. back up the needed archive log files
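In Oracle terms, those steps map roughly onto the following commands (a sketch; the file copy itself happens at the OS or storage level between steps 2 and 4):

    -- 1. confirm the database is in archive log mode
    SELECT log_mode FROM v$database;    -- expect 'ARCHIVELOG'

    -- 2. put the whole database into hot backup mode
    ALTER DATABASE BEGIN BACKUP;

    --    ... copy the datafiles to the backup location at the OS level ...

    -- 4. take the database out of hot backup mode
    ALTER DATABASE END BACKUP;

    -- 5. force a log switch so the current redo is archived and can be backed up
    ALTER SYSTEM ARCHIVE LOG CURRENT;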

Example backup schedule:

  • nightly hot backup
  • continuous backup of all archive log files since the last full hot backup

II. Incremental backup strategy (a point-in-time recovery sketch follows the list):

  • weekly full backup
  • incremental delta backups every 4 hours
  • continuous backup of all archive log files since the last incremental backup (see above)
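With either schedule in place, a point-in-time recovery within the 4-hour window might look like this in Oracle SQL*Plus (a sketch; the timestamp is illustrative, and the datafiles are assumed to have been restored from the latest backup first):

    STARTUP MOUNT;

    -- roll forward through the archived logs up to just before the failure
    RECOVER DATABASE UNTIL TIME '2024-01-15:11:45:00';

    ALTER DATABASE OPEN RESETLOGS;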
