Sharding Database Infrastructure Horizontally - Sharding in Oracle 12.2 – Part 2

    By: Nassyam Basha on Jan 29, 2018


     

    By Nassyam Basha, Data Intensity | Simon Pane, Editor 

In the first article in this series, we reviewed the architecture of Oracle Sharding and its various technical components in depth. This article focuses purely on the prerequisites and the initial deployment steps.

We will first configure the catalog database and then deploy 2 shards at the primary site and 2 shards at the standby site using the Active Data Guard option.

    Oracle Sharding Plan
This article focuses only on the deployment. The rest of the service configuration, monitoring, testing of data distribution in the sharded tables, etc. will be covered in the final article in this series.

This configuration involves various components and binaries, so it is very important to maintain consistency across all shards, from the installation of the binaries through to the complete configuration. We are hosting the shard catalog database and GSM on one server, while the remaining shards are on separate individual hosts.

    High-level configuration plan:

    - Hosts prerequisites

    - Prepare directory structure for shards

    - Configure catalog database

    - Configure remote scheduler

    - Configure shard catalog and registration of GSM with the catalog

    - Add shardgroups and shards

    - Deployment

     

    Notes:

    1. Only one shard can be hosted on each server

2. We should have the same number of shards in both the primary and standby sites

3. Make a record of, and secure, the passwords that will be used for the various accounts (gsmadmin, gsmcatuser, sys).

     

[Figure: deployment topology: shard catalog database and GSM on ora-ccrep; shards on ORA-C1 through ORA-C4]

     

     

    Prerequisites of Sharding Deployment

1. Disable the firewall (iptables) on all hosts. [Example output is shown for a subset of the hosts only.]

     

    [root@ora-ccrep ~]# service iptables stop

    [root@ora-ccrep ~]# chkconfig iptables off

    [root@ora-ccrep ~]#

     

    [root@ORA-C1 ~]# service iptables stop

    [root@ORA-C1 ~]# chkconfig iptables off
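If the hosts were running firewalld (Oracle Linux/RHEL 7) instead of the iptables service shown above, which is not the case in this environment, the equivalent commands would be:

# firewalld equivalent of the iptables commands above (only applicable on firewalld-based hosts)
systemctl stop firewalld
systemctl disable firewalld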

     

2. Validate the RDBMS home binaries, ensuring that the file ownership and permissions match those shown below. Crosscheck on all the other hosts.

    [root@ora-ccrep ~]# ls -ltr /u01/app/oracle/product/12.2.0/dbhome_1/bin/ext*

    -rw------- 1 oracle oinstall 0 Sep 18 2014 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extprocO

    -rw------- 1 oracle oinstall 2251869 Jan 26 13:28 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extjoboO

    -rw------- 1 oracle oinstall 2251869 Jan 26 13:28 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extjobO

    -rwxr-x--- 1 oracle oinstall 361 Jul 20 19:51 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extusrupgrade

    -rwx------ 1 oracle oinstall 2241831 Jul 20 19:52 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extjobo

    -rwsr-x--- 1 root oinstall 2241831 Jul 20 19:52 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extjob

    -rwxr-x--x 1 oracle oinstall 199071 Jul 20 19:53 /u01/app/oracle/product/12.2.0/dbhome_1/bin/extproc

    [root@ora-ccrep ~]# ls -ltr /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/externaljob.ora

    -rw-r----- 1 root oinstall 1534 Dec 21 2005 /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/externaljob.ora

    [root@ora-ccrep ~]#

     

- Check the permissions of $ORACLE_HOME/bin/extjob: chown root, chmod 4750

- Check the permissions of $ORACLE_HOME/rdbms/admin/externaljob.ora: chown root, chmod 640

- Check the permissions of $ORACLE_HOME/bin/jssu: chown root, chmod 4750
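If any of these permissions need to be corrected, the following is a minimal sketch of the required commands, run as root and using the same ORACLE_HOME path shown above:

# Run as root on each host; ORACLE_HOME as used in this environment
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
chown root $ORACLE_HOME/bin/extjob
chmod 4750 $ORACLE_HOME/bin/extjob
chown root $ORACLE_HOME/rdbms/admin/externaljob.ora
chmod 640 $ORACLE_HOME/rdbms/admin/externaljob.ora
chown root $ORACLE_HOME/bin/jssu
chmod 4750 $ORACLE_HOME/bin/jssu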

     

3. Add all hosts' information to the /etc/hosts file on the catalog server and on all the shard servers. Because the shard servers work together as a logical cluster (much like RAC), we must ensure that every host is reachable from every other host; a quick reachability check is sketched after the hosts file below.

    # cat /etc/hosts

    127.0.0.1 localhost localhost.localdomain

    192.168.0.110 ORA-C4.localdomain ORA-C4

    192.168.0.33 ora-ccrep.localdomain ora-ccrep

    192.168.0.90 ORA-C2.localdomain ORA-C2

    192.168.0.80 ORA-C1.localdomain ORA-C1

    192.168.0.100 ORA-C3.localdomain ORA-C3
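A simple way to run the reachability check mentioned above is a small loop such as the following (a sketch only; the hostnames are those defined in /etc/hosts):

# Run on every host; each name should report as reachable
for h in ora-ccrep ORA-C1 ORA-C2 ORA-C3 ORA-C4; do
    ping -c 1 "$h" > /dev/null && echo "$h reachable" || echo "$h NOT reachable"
done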

     

4. Create the directory structure (if not using ASM) for the data files and configure the Fast Recovery Area on all the shards (ORA-C1/2/3/4). If the directories do not already exist, they can be created as shown in the sketch after the listing below.

    [oracle@ORA-C1 dbhome_1]$ cd /u01/app/oracle/oradata/

    [oracle@ORA-C1 oradata]$ pwd

    /u01/app/oracle/oradata

    [oracle@ORA-C1 oradata]$ cd ../fast_recovery_area/

    [oracle@ORA-C1 fast_recovery_area]$ pwd

    /u01/app/oracle/fast_recovery_area

    [oracle@ORA-C1 fast_recovery_area]$
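If these directories do not yet exist on a shard host, a minimal sketch to create them (same paths as above, run as the oracle user on ORA-C1/2/3/4) is:

mkdir -p /u01/app/oracle/oradata
mkdir -p /u01/app/oracle/fast_recovery_area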

     

5. In the initial configuration, we created the shard catalog database using DBCA with OMF. In this step we make a few required initialization parameter changes, create users, and grant the necessary privileges. This step is applicable only on ORA-CCREP/SHCAT.

    SQL> alter system set open_links=16 scope=spfile;

    System altered.

    SQL> alter system set open_links_per_instance=16 scope=spfile;

    System altered.

    SQL> alter user gsmcatuser identified by oracle;

    User altered.

    SQL> alter user gsmcatuser account unlock;

    User altered.

    SQL> create user gsmadmin identified by oracle;

    User created.

    SQL> grant connect, create session, gsmadmin_role to gsmadmin;

    Grant succeeded.

    SQL> grant inherit privileges on user sys to gsmadmin_internal;

    Grant succeeded.

    SQL> alter system set events 'immediate trace name GWM_TRACE level 7';

    System altered.

    SQL> alter system set event='10798 trace name context forever, level 7' scope=spfile;

    System altered.

    SQL>

     

    6. Restart the SHCAT database to allow the static parameter changes to come into effect.

    SQL> shut immediate

    Database closed.

    Database dismounted.

    ORACLE instance shut down.

    SQL> startup

    ORACLE instance started.

     

    Total System Global Area 629145600 bytes

    Fixed Size 8623832 bytes

    Variable Size 297797928 bytes

    Database Buffers 318767104 bytes

    Redo Buffers 3956736 bytes

    Database mounted.

    Database opened.

    SQL>

     

7. Configure the Oracle Scheduler Agent to run remote jobs from ORA-CCREP/SHCAT. The Oracle Scheduler Agent is a program that allows remote external jobs to be scheduled and run. It is included with the Oracle Database 12.2 software installation, or a dedicated Oracle Scheduler Agent can be installed as client software on a remote server with no RDBMS home. In this step we also configure the remote scheduler in the catalog database; the dbms_xdb.sethttpport call opens the HTTP port that the configured agents use to connect to the catalog.

     

    SQL> execute dbms_xdb.sethttpport(8080);

    PL/SQL procedure successfully completed 

    SQL> commit;

    Commit complete.

    SQL> @?/rdbms/admin/prvtrsch.plb

    PL/SQL procedure successfully completed.

     

    Session altered. 

    PL/SQL procedure successfully completed.

    PL/SQL procedure successfully completed.

     

    Session altered.

    Session altered.

    Package created.

    No errors.

    . . .

    . .

    .

    Commit complete.
    Session altered.
    PL/SQL procedure successfully completed.
    SQL>
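As an optional sanity check from the OS prompt on ORA-CCREP (a sketch only, assuming the SHCAT environment is already set; dbms_xdb.gethttpport simply returns the port configured above):

sqlplus -S / as sysdba <<'EOF'
select dbms_xdb.gethttpport as http_port from dual;
EOF
# Expected to return 8080 if the earlier sethttpport call succeeded.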

     

    8. Configure the agent password (again in the SHCAT database) for remote job authentication.

    SQL> exec DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS('oracle123');

    PL/SQL procedure successfully completed.

    SQL>

     

9. The next commands run on the shard hosts to register the Scheduler Agent; they prompt for the agent registration password set in the previous step. After that we start the agent. This step needs to be performed on all shard hosts (ORA-C1/2/3/4); a quick way to verify the agents on all hosts is sketched after the example output below.

    [oracle@ORA-C1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1

    [oracle@ORA-C1 ~]$ export ORACLE_BASE=/u01/app/oracle

    [oracle@ORA-C1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH

    [oracle@ORA-C1 ~]$ schagent -registerdatabase ora-ccrep 8080

Agent Registration Password ? **********

    Oracle Scheduler Agent Registration for 12.2.0.1.2 Agent

    Agent Registration Successful!

    [oracle@ORA-C1 ~]$ schagent -start

    Scheduler agent started using port 32087

    [oracle@ORA-C1 ~]$ schagent -status

    Agent running with PID 9062

    Agent_version:12.2.0.1.2

    Running_time:00:00:07

    Total_jobs_run:0

    Running_jobs:0

    Platform:Linux

    ORACLE_HOME:/u01/app/oracle/product/12.2.0/dbhome_1

    ORACLE_BASE:/u01/app/oracle

    Port:32087

    Host:ORA-C1.localdomain

    [oracle@ORA-C1 ~]$
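To confirm that the agent is up on every shard host without logging in to each one, a small loop such as the following can be used (a sketch only; it assumes passwordless ssh for the oracle user, which is not a requirement of sharding itself):

for h in ORA-C1 ORA-C2 ORA-C3 ORA-C4; do
    echo "=== $h ==="
    ssh oracle@"$h" "/u01/app/oracle/product/12.2.0/dbhome_1/bin/schagent -status"
done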

     

    GDS Configuration

At this point, we have completed most of the required prerequisites, so we can now configure Global Data Services using the Global Service Manager utility. As described in the first part, GSM is already installed on the ORA-CCREP host. Managing GDS is only possible through the GDSCTL utility, which becomes accessible after exporting the GSM home environment. Using this utility, we will create the configuration.
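Before invoking GDSCTL, the GSM home environment must be set. A minimal sketch, assuming the GSM home path shown in the tnsnames.ora listing below (the ". oraenv" call used in step 2 achieves the same thing), is:

export ORACLE_HOME=/home/app/oracle/product/12.2.0/gsmhome_1
export PATH=$ORACLE_HOME/bin:$PATH
gdsctl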

1. Set up connectivity from GSM to the catalog database: during this step, run from the GSM home, we connect to the shard catalog database (SDB) called SHCAT. This can be done using either EZConnect or a standard TNS service.

    a. EZconnect:

GDSCTL>connect gsmadmin/oracle@192.168.0.33:1525/SHCAT

     

    b. TNS Service :

    -bash-4.2$ cd /home/app/oracle/product/12.2.0/gsmhome_1/network/admin/

    -bash-4.2$ cat tnsnames.ora

    SHCAT =

    (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = ora-ccrep.localdomain)(PORT = 1525))

    (CONNECT_DATA =

    (SERVER = DEDICATED)

    (SERVICE_NAME = shcat)

    )

    )

    -bash-4.2$

     

2. Create the shard catalog in the catalog database: in this process we also define three regions:

a. Europe – where GSM is hosted

b. Canada – the 2 primary shards

c. India – the 2 standby shards

     

    -bash-4.2$ . oraenv

    ORACLE_SID = [gsm] ? gsm

    The Oracle base remains unchanged with value /home/app/oracle

    -bash-4.2$ gdsctl

    GDSCTL: Version 12.2.0.1.0 - Production on Sat Jul 22 23:43:51 IST 2017


    Copyright (c) 2011, 2016, Oracle. All rights reserved.

    Welcome to GDSCTL, type "help" for information.

     

    Warning: current GSM name is not set automatically because gsm.ora contains zero or several GSM entries. Use "set gsm" command to set GSM for the session.
    Current GSM is set to GSMORA
    GDSCTL>create shardcatalog -database shcat -user gsmadmin/oracle -chunks 12 -region europe,canada,india -sdb shcat
    Catalog is created
    GDSCTL>

     

3. Add the GSM: the GSM is configured with a dedicated listener and port and is attached to the catalog database SHCAT. After the GSM is successfully added, we start it and check its status.

     

    GDSCTL>add gsm -gsm ShardDir_DG -listener 12121 -pwd oracle -catalog shcat -region europe

    GSM successfully added

    GDSCTL>start gsm -gsm ShardDir_DG

    GSM is started successfully

    GDSCTL>status gsm

    Alias SHARDDIR_DG

    Version 12.2.0.1.0

    Start Date 22-JUL-2017 23:44:44

    Trace Level off

    Listener Log File /home/app/oracle/diag/gsm/ora-ccrep/sharddir_dg/alert/log.xml

    Listener Trace File /home/app/oracle/diag/gsm/ora-ccrep/sharddir_dg/trace/ora_57914_139702432846208.trc

    Endpoint summary (ADDRESS=(HOST=ora-ccrep.localdomain)(PORT=12121)(PROTOCOL=tcp))

    GSMOCI Version 2.2.1

    Mastership Y

    Connected to GDS catalog Y

    Process Id 57990

    Number of reconnections 0

    Pending tasks. Total 0

    Tasks in process. Total 0

    Regional Mastership TRUE

    Total messages published 0

    Time Zone +05:30

    Orphaned Buddy Regions:

    None

    GDS region europe

     

    GDSCTL>

     

    4. Modify the catalog database to update the password, which will be used for remote scheduler agent registrations.

    GDSCTL>modify catalog -agent_password oracle123

    The operation completed successfully

    GDSCTL>

     

5. Add 2 shard groups to the shardspace: one for the primary database role and one for the standby database role. The role is determined by the -deploy_as attribute used in the command.

    ● “Canada” region for primary shard group

    ● “India” region for standby shard group

     

    GDSCTL>add shardgroup -shardgroup primary_canada_shg -deploy_as primary -region canada
    The operation completed successfully
    GDSCTL>add shardgroup -shardgroup standby_india_shg -deploy_as active_standby -region india
    The operation completed successfully
    GDSCTL>

6. Create the shards: this step registers each new shard database with the shardspace/shardgroup and with the GDS catalog. Note that creating a shard does not create the underlying RDBMS database; after the shards are created, we must run the DEPLOY command to actually build them on the designated hosts. In this step we can also specify various attributes, such as the role of each shard (primary, standby, or active standby) and many other options.

     

    GDSCTL>create shard -shardgroup primary_canada_shg -destination ora_c1 -osaccount oracle -ospassword oracle -sys_password oracle
    The operation completed successfully
    DB Unique Name: sh1
    GDSCTL>create shard -shardgroup primary_canada_shg -destination ora_c2 -osaccount oracle -ospassword oracle -sys_password oracle
    The operation completed successfully
    DB Unique Name: sh2
    GDSCTL>create shard -shardgroup standby_india_shg -destination ora_c3 -osaccount oracle -ospassword oracle -sys_password oracle
    The operation completed successfully
    DB Unique Name: sh3
    GDSCTL>create shard -shardgroup standby_india_shg -destination ora_c4 -osaccount oracle -ospassword oracle -sys_password oracle
    The operation completed successfully
    DB Unique Name: sh4

     

● We have created 4 shards in total: 2 under the primary database category and 2 under the standby database category. The shard unique names (sh1 through sh4) are allocated internally by Oracle because we have configured the system-managed sharding type.

    7. Review the configuration: At this point we have performed all of the steps prior to deploying the configuration. This configuration can be viewed with various commands.

     

    GDSCTL>config shard

    Name                Shard Group         Status    State       Region    Availability

    ----                -----------         ------    -----       ------    ------------

    sh1                 primary_canada_shg  U         none        canada    -

    sh2                 primary_canada_shg  U         none        canada    -

    sh3                 standby_india_shg   U         none        india     -

    sh4                 standby_india_shg   U         none        india     -

     

     

● This lists all the shards of the configuration registered in the shard catalog database. Pay particular attention to the Status, State, and Availability columns before and after the sharding deployment.

    GDSCTL>config shardspace

Shard space    Chunks
-----------    ------
shardspaceora  12

    GDSCTL>config

    Regions

    ------------------------

    canada

    europe

    india


    GSMs

    ------------------------

    sharddir_dg

     

    Sharded Database

    ------------------------

    shcat

     

    Databases

    ------------------------

    sh1

    sh2

    sh3

    sh4

     

    Shard Groups

    ------------------------

    primary_canada_shg

    standby_india_shg

     

    Shard spaces

    ------------------------

    shardspaceora

     

    Services

    ------------------------

     

    GDSCTL pending requests

    ------------------------

    Command Object Status

    ------- ------ ------

    Global properties

    ------------------------

    Name: oradbcloud

    Master GSM: sharddir_dg

    DDL sequence #: 0

    GDSCTL>config vncr

    Name Group ID

    ---- --------

    192.168.0.33

    GDSCTL>

     

8. VNCR ("Valid Node Checking for Registration"): initially released in 11.2.0.4 and included with Oracle Database 12c, VNCR allows the listener to accept instance registrations only from valid nodes. Here we add each shard address to the catalog to mark it as a valid node. Adding VNCR entries is optional; however, inviting the nodes and attaching them to their specific shard groups makes the deployment smoother.

     

    GDSCTL>add invitednode 192.168.0.80 -group primary_canada_shg

    GDSCTL>add invitednode 192.168.0.90 -group primary_canada_shg

    GDSCTL>add invitednode 192.168.0.100 -group standby_india_shg

    GDSCTL>add invitednode 192.168.0.110 -group standby_india_shg

    GDSCTL>config vncr

    Name Group ID

    ---- --------

    192.168.0.33

    192.168.0.80 primary_canada_shg

    192.168.0.90 primary_canada_shg

    192.168.0.100 standby_india_shg

    192.168.0.110 standby_india_shg

    GDSCTL>

     

     

9. The final deployment: deploy and review the output to ensure that the configuration added so far is correct. There are various situations in which deployment may fail.

     

    GDSCTL>deploy

    deploy: examining configuration...

    deploy: deploying primary shard 'sh1' ...

    deploy: network listener configuration successful at destination 'ora_c1'

    deploy: starting DBCA at destination 'ora_c1' to create primary shard 'sh1' ...

    deploy: deploying primary shard 'sh2' ...

    deploy: network listener configuration successful at destination 'ora_c2'

    deploy: starting DBCA at destination 'ora_c2' to create primary shard 'sh2' ...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: waiting for 2 DBCA primary creation job(s) to complete...

    deploy: DBCA primary creation job succeeded at destination 'ora_c1' for shard 'sh1'

    deploy: deploying standby shard 'sh3' ...

    deploy: network listener configuration successful at destination 'ora_c3'

    deploy: starting DBCA at destination 'ora_c3' to create standby shard 'sh3' ...

    deploy: DBCA primary creation job succeeded at destination 'ora_c2' for shard 'sh2'

    deploy: deploying standby shard 'sh4' ...

    deploy: network listener configuration successful at destination 'ora_c4'

    deploy: starting DBCA at destination 'ora_c4' to create standby shard 'sh4' ...

    deploy: waiting for 2 DBCA standby creation job(s) to complete...

    deploy: waiting for 2 DBCA standby creation job(s) to complete...

    deploy: waiting for 2 DBCA standby creation job(s) to complete...

    deploy: waiting for 2 DBCA standby creation job(s) to complete...

    deploy: waiting for 2 DBCA standby creation job(s) to complete...

    deploy: DBCA standby creation job succeeded at destination 'ora_c4' for shard 'sh4'

    deploy: DBCA standby creation job succeeded at destination 'ora_c3' for shard 'sh3'

    deploy: requesting Data Guard configuration on shards via GSM

    deploy: shards configured successfully

    The operation completed successfully

    GDSCTL>
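A quick OS-level confirmation that the shard databases were actually created is to check for their background processes on the shard hosts (a sketch only, assuming ssh access as the oracle user; the instance name is assumed to match the DB unique name reported by GDSCTL):

ssh oracle@ORA-C1 "ps -ef | grep '[p]mon'"
# Expect a pmon process for the new shard instance, e.g. ora_pmon_sh1 on ORA-C1.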

     

    10. Verify the sharding configuration again:

     

    GDSCTL>config shard

Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 primary_canada_shg  Ok        Deployed    canada    ONLINE
sh2                 primary_canada_shg  Ok        Deployed    canada    ONLINE
sh3                 standby_india_shg   Ok        Deployed    india     READ ONLY
sh4                 standby_india_shg   Ok        Deployed    india     READ ONLY

    GDSCTL>

     

    Summary

In this article, we completed the successful deployment of an Oracle Sharding configuration with 2 shards in the primary role and 2 shards in the standby role using Active Data Guard (the configuration can also use Data Guard Broker and the Fast-Start Failover [FSFO] option).

We implemented the prerequisites for the shards and prepared the shard catalog database and the GSM. There was also some special setup related to the Oracle Scheduler Agent so that it can communicate with the master shard catalog database for remote job execution.

    After all these prerequisites and configuration, we deployed Oracle Sharding successfully. Finally, we used various commands from the GDSCTL utility to check the configuration status of the shards.

     

Nassyam Basha is a Database Administrator with around ten years of experience as a production Oracle DBA, currently working as a Senior Principal Consultant at Data Intensity. He holds a master's degree in Computer Applications from the University of Madras. He is an Oracle 11g Certified Master and an Oracle ACE Director. He actively participates in Oracle-related forums such as OTN, where he holds Super Hero status, has been recognized as a "Guru" on Oracle Support, acts as an OTN moderator, and has written numerous articles for OTN and Toad World. He maintains an Oracle technology blog at www.oracle-ckpt.com and can be reached at nassyambasha@gmail.com.
