Check error on a Shard during execution of DDL commands with recovery


Case 1: A user attempts to create a tablespace set on the sharded database, but the datafile location on one shard does not have write permission.

1. Connect with the shard user

SQL> connect example_user/password1
Connected

2. Create a tablespace set; the DDL is applied to every shard (local or remote database):

SQL> create tablespace set tbsset;
Tablespace created.

3. Check whether the command executed successfully or is still pending on any shard:

GDSCTL> show ddl

id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****
2  create tablespace set tbsset                  shard01 

4. Fetch detailed information about the error:

GDSCTL> config shard -shard shard01
Name: shard01
Shard Group: dbs1
Status: Ok
State: Deployed
Region: east
Connection string: (DESCRIPTION=(ADDRESS=(HOST=shard01-host)(PORT=1521)
(PROTOCOL=tcp))
(CONNECT_DATA=(SID=shard01)))
SCAN address:
ONS remote port: 0
Disk Threshold, ms: 20
CPU Threshold, %: 75
Version: 18.0.0.0
Failed DDL: create tablespace set tbsset
DDL Error: ORA-02585: create tablespace set failure, one of its tablespaces not
created
ORA-01119: error in creating database file '/ade/b/3667445372/oracle/rdbms/dbs/
SHARD01/datafile/o1_mf_tbsset_%u_.dbf'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1 (ngsmoci_execute)
Failed DDL id: 2
Availability: ONLINE

5. This shows that shard01 lacks write permission on the datafile location, so log in to that server and grant the permission.

6. Run the following command to recover shard01:

GDSCTL> recover shard -shard shard01

7. Verify with the SHOW DDL command:

GDSCTL> show ddl

id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****
2  create tablespace set tbsset

Case 2: The user attempts to create another tablespace set, and the DDL fails on a shard because a local tablespace with the same name already exists there.

1. Connect to the shard catalog and create a tablespace set:

SQL> create tablespace set tbs_set;

2. Check the status:

GDSCTL> show ddl

id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****
2  create tablespace set tbsset
3  create tablespace set tbs_set                 shard01 

3. Check the complete error details with the CONFIG SHARD command:

GDSCTL> config shard -shard shard01
Name: shard01
……
Failed DDL: create tablespace set tbs_set
DDL Error: ORA-02585: create tablespace set failure, one of its tablespaces not
created
ORA-01543: tablespace 'TBS_SET' already exists (ngsmoci_execute)

4. There are two ways to resolve this: either drop the conflicting local tablespace on shard01 and run the recovery process as in Case 1, or drop the newly created tablespace set on all the other shards and create a new one with a different name.
Note: Case 1 already demonstrated the first option, so here we follow the second.

5. Issue the drop and create commands:

SQL> drop tablespace set tbs_set;
SQL> create tablespace set tbsset2;

6. Check the status with the SHOW DDL command:

GDSCTL> show ddl

id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****
2  create tablespace set tbsset
3  create tablespace set tbs_set                 shard01
4  drop tablespace set tbs_set
5  create tablespace set tbsset2 


Note: DDLs 4 and 5 are not attempted on shard01 because DDL 3 failed there.

7. To resume, run the recovery command with the -ignore_first option, which skips DDL 3 on shard01. After this, the tablespace set exists on all the other shards but not on shard01, which is what we want since we are abandoning it.

GDSCTL> recover shard -shard shard01 -ignore_first

8. Recovery then proceeds to DDL 4, the drop of that same tablespace set, which fails again because the tablespace does not exist on shard01:

GDSCTL> show ddl
id DDL Text                                Failed shards
-- --------------------------------------  -------------
1  create user example_user identified by *****
2  create tablespace set tbsset
3  create tablespace set tbs_set
4  drop tablespace set tbs_set             shard01
5  create tablespace set tbsset2 

9. Skip DDL 4 on shard01 in the same way, because the tablespace is already absent there:

GDSCTL> recover shard -shard shard01 -ignore_first

10. Check that every command has now executed on all shards without error:

GDSCTL> show ddl
id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****
2  create tablespace set tbsset
3  create tablespace set tbs_set
4  drop tablespace set tbs_set
5  create tablespace set tbsset2

Note: An empty Failed shards column means the DDL executed successfully on all shards.
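The recovery behavior walked through above can be modeled with a small sketch (a hypothetical Python illustration, not Oracle's implementation): the catalog keeps an ordered DDL log, a failed DDL blocks all later DDLs on that shard, and recover with -ignore_first skips the first pending DDL.

```python
# Hypothetical sketch of the DDL recovery semantics described above.
# Not Oracle code: it only models "a failed DDL blocks later DDLs on
# that shard" and "recover -ignore_first skips the first pending DDL".

class Shard:
    def __init__(self, name):
        self.name = name
        self.applied = []      # DDL ids successfully executed here
        self.pending = []      # DDL ids queued behind a failure
        self.failing = set()   # DDL ids that would fail on this shard

    def execute(self, ddl_id):
        if self.pending:               # blocked: queue behind the failure
            self.pending.append(ddl_id)
        elif ddl_id in self.failing:   # first failure starts the queue
            self.pending.append(ddl_id)
        else:
            self.applied.append(ddl_id)

    def recover(self, ignore_first=False):
        """Replay pending DDLs; optionally skip the first (failed) one."""
        if ignore_first and self.pending:
            self.pending.pop(0)
        while self.pending:
            if self.pending[0] in self.failing:
                return False           # still failing; stays pending
            self.applied.append(self.pending.pop(0))
        return True

# Case 2 walkthrough: DDL 3 fails on shard01, so DDLs 4 and 5 queue up.
shard01 = Shard("shard01")
shard01.failing = {3, 4}     # 3: name clash, 4: drop of an absent tablespace
for ddl in [1, 2, 3, 4, 5]:
    shard01.execute(ddl)
print(shard01.applied, shard01.pending)    # [1, 2] [3, 4, 5]

shard01.recover(ignore_first=True)         # skip DDL 3; DDL 4 fails again
print(shard01.pending)                     # [4, 5]
shard01.recover(ignore_first=True)         # skip DDL 4; DDL 5 succeeds
print(shard01.applied, shard01.pending)    # [1, 2, 5] []
```

This mirrors why two recover passes were needed in Case 2: each -ignore_first discards exactly one failed DDL before replay continues.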


DDL Execution in Oracle Sharded Database

The shard catalog database keeps copies of all the objects present in the sharded database. Once the catalog validates a DDL, it is applied to all shards.
When a new shard is added, the catalog database propagates the DDLs to it as well.

You can issue DDLs in an SDB in two ways:

1. The GDSCTL SQL command.
When you execute a command this way, all shards are updated and the execution status is returned. A master copy is kept in the shard catalog.


GDSCTL> sql "create tablespace set tbsset"

2. Connect to the shard catalog database with SQL*Plus using the GDS$CATALOG.sdbname service.
When you execute a command this way, the response reflects local execution on the catalog, and the other shards are updated in the background.

SQL> create tablespace set tbsset;

In SQL*Plus, two types of objects can be created: SDB objects and local objects. Local objects exist only in the shard catalog and are used for administrative purposes.
Note: The type of object (SDB or local) created in a SQL*Plus session depends on whether SHARD DDL mode is enabled in the session. The mode is enabled by default when you connect to the catalog as the SDB user.

-- Disable shard DDL to create local objects
alter session disable shard ddl;

-- Enable shard DDL to create SDB (shared) objects
alter session enable shard ddl;

Note: If you connect to the catalog directly as the SDB user, you do not need to enable shard DDL; it is enabled automatically.

Check the status of DDL commands:
Use SHOW DDL and CONFIG SHARD from GDSCTL. They report the shards on which a command failed.

GDSCTL> show ddl
id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****

Note: If a DDL fails on a shard, all further DDLs are blocked on that shard until the failure is resolved (GDSCTL RECOVER SHARD).

Example: checking that a DDL command executed on all shards.

-- Enable the Shard DDL Session
SQL> alter session enable shard ddl;

-- Issue a CREATE USER command (note the intentional typo IDENTRIFIED)
SQL> CREATE USER example_user IDENTRIFIED BY password1;
CREATE USER example_user IDENTRIFIED BY password1
*
ERROR at line 1:
ORA-00922: missing or invalid option

-- Check whether the command appears in the list
GDSCTL> show ddl
id DDL Text Failed shards
-- -------- -------------

Note: The command is not listed, which means it was not executed on any shard; a statement that fails validation on the catalog is never recorded.

-- Issue the CREATE USER command again, correctly this time
SQL> CREATE USER example_user IDENTIFIED BY password1;
User created.

-- The command now shows as executed successfully
GDSCTL> show ddl
id DDL Text                                      Failed shards
-- --------------------------------------------  -------------
1  create user example_user identified by *****

Sharding Database in Oracle


Sharding is a data tier architecture in which data is horizontally partitioned across independent databases. Each database in such a configuration is called a shard. All of the shards together make up a single logical database, which is referred to as a sharded database (SDB). Oracle Sharding is implemented based on the Oracle Database partitioning feature.

SHARDED DATABASE:
Each shard is hosted independently, with its own resources (CPU, disk, memory), as a standalone server.
All of the shards together make up a single logical database, called a sharded database.
SHARDED TABLE:
A sharded table is a table partitioned into smaller, more manageable pieces across multiple databases (shards), so that each shard contains the table with the same columns but a different subset of rows.

Note: Oracle Database supports scaling up to 1000 shards.

Terms Used in Shard Environment

Sharded database (SDB) – a collection of shards presented as a single logical Oracle Database.
Shards – independent physical Oracle databases.
Global service – database services that provide access to data in an SDB.
Shard catalog – stores SDB configuration data and centrally manages the sharded database. The shard catalog also contains the master copy of all duplicated tables in an SDB and uses materialized views to automatically replicate changes to duplicated tables on all shards.
Shard directors – network listeners that enable high-performance connection routing based on a sharding key.
Connection pools – at runtime, act as shard directors by routing database requests across pooled connections.
Management interfaces – GDSCTL (command-line utility) and Oracle Enterprise Manager (GUI)

Defining and Creating Objects in a Sharded Database

Sharded table
A sharded table is a table that is partitioned into smaller and more manageable pieces among multiple databases, called shards.

Example
The following statement creates a sharded table, horizontally partitioning it across shards on the sharding key cust_id:


CREATE SHARDED TABLE customers
( cust_id NUMBER NOT NULL
, name VARCHAR2(50)
, address VARCHAR2(250)
, region VARCHAR2(20)
, class VARCHAR2(3)
, signup DATE
, CONSTRAINT cust_pk PRIMARY KEY(cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts1;

Note: Global indexes are not supported on sharded tables.
Oracle Sharding creates and manages tablespaces as a unit called a tablespace set.
The PARTITIONS AUTO clause specifies that the number of partitions should be automatically determined.

CHUNK
The unit of data migration between shards is a chunk. A chunk is a set of tablespaces that store corresponding partitions of all tables in a table family.
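The chunk concept can be illustrated with a small sketch (a hypothetical Python model, not Oracle's internal consistent-hash algorithm): a sharding key hashes into a fixed range, the range is divided into chunks, and each chunk is assigned to exactly one shard, so moving a chunk moves all rows of the table family whose keys hash into it.

```python
import hashlib

# Hypothetical sketch of consistent-hash chunk routing. The chunk count
# and chunk-to-shard assignment below are illustrative values only.

NUM_CHUNKS = 12

def chunk_of(cust_id):
    """Map a sharding key to a chunk number via a stable hash."""
    h = hashlib.sha256(str(cust_id).encode()).hexdigest()
    return int(h, 16) % NUM_CHUNKS

# Static chunk-to-shard assignment; migrating a chunk between shards
# amounts to updating this map (and moving the chunk's tablespaces).
chunk_map = {c: f"shard{(c % 3) + 1:02d}" for c in range(NUM_CHUNKS)}

def shard_of(cust_id):
    return chunk_map[chunk_of(cust_id)]

# All rows of a table family with the same key hash to the same chunk,
# so parent and child rows are always co-located on one shard.
print(chunk_of(42), shard_of(42))
```

The design point this illustrates: because routing goes key -> chunk -> shard, rebalancing only touches the chunk map, never the hash function, so existing keys keep hashing to the same chunk.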

Table Family
A set of tables linked by parent-child relationships with referential constraints (foreign keys) is referred to as a table family. A table in a table family that has no parent is called the root table.
In this example the Customers table is the parent and the Orders table is the child.

CREATE SHARDED TABLE Customers
( CustNo NUMBER NOT NULL
, Name VARCHAR2(50)
, Address VARCHAR2(250)
, region VARCHAR2(20)
, class VARCHAR2(3)
, signup DATE
)
PARTITION BY CONSISTENT HASH (CustNo)
PARTITIONS AUTO
TABLESPACE SET ts1
;
CREATE SHARDED TABLE Orders
( OrderNo NUMBER
, CustNo NUMBER NOT NULL
, OrderDate DATE
)
PARENT Customers
PARTITION BY CONSISTENT HASH (CustNo)
PARTITIONS AUTO
TABLESPACE SET ts1
;

Duplicated Table
A table with the same contents on every shard is called a duplicated table; it is simply present on all the shards.
Oracle Sharding synchronizes the contents of duplicated tables using materialized view replication. A duplicated table on each shard is represented by a materialized view.
Note: The refresh frequency of all duplicated tables is controlled by the database initialization parameter SHRD_DUPL_TABLE_REFRESH_RATE (default 60 seconds).

CREATE DUPLICATED TABLE Products
( StockNo NUMBER PRIMARY KEY
, Description VARCHAR2(20)
, Price NUMBER(6,2)
);

Init Parameters to Consider While Creating an Oracle Database


The following parameters should be reviewed before delivering a database to a production environment, to avoid future downtime. Once the database is live, changing these parameters requires bouncing the database, so in a critical environment they should never be ignored.

DB_FILES (default 200, recommended 1000)
Sets the maximum number of datafiles in the database; size it generously so you can add more datafiles later. Avoids ORA-00059: maximum number of DB_FILES exceeded.
PROCESSES (default 150, recommended 600)
Sets the maximum number of processes that can be started by the instance. Avoids ORA-00020: maximum number of processes (%s) exceeded.
SESSIONS (default 172)
Derived automatically from PROCESSES when you change PROCESSES with ALTER SYSTEM. Avoids ORA-00018: maximum number of sessions exceeded.
TRANSACTIONS (default 189)
Also adjusted automatically when you configure PROCESSES.
SESSION_CACHED_CURSORS (default 50, increase to 300)
The number of session cursors to cache; a larger cache reduces parsing overhead for frequently executed statements.
UNDO_RETENTION (default 900)
Increase the undo retention to 1800; longer retention helps avoid ORA-01555 Snapshot Too Old errors in the alert log.
SMTP_OUT_SERVER
If you may need to send mail from the database in the future, set this parameter in advance.
OPEN_CURSORS (default 300)
Specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. Avoids ORA-01000: maximum open cursors exceeded.
AUDIT_TRAIL (default DB)
Check the audit trail parameter. If you need auditing, leave it enabled; otherwise disable it in advance.

Memory parameters
Oracle has three major memory components:

MEMORY_TARGET manages all other components (SGA plus PGA) and tunes them at run time based on workload; it is the simplest method. If you use it, always set MEMORY_MAX_TARGET somewhat higher than MEMORY_TARGET so you can grow MEMORY_TARGET later without restarting the database.

MEMORY_TARGET
The total memory consumed by the Oracle instance for its processes.
MEMORY_MAX_TARGET
Always set this higher than MEMORY_TARGET, so you can increase MEMORY_TARGET in the future without restarting the database.

SGA_TARGET covers components such as the shared pool, large pool, Java pool, and database buffer cache, and resizes them at run time as needed.

SGA_TARGET
Specifies the total size of all SGA components.
SGA_MAX_SIZE
Set this higher than SGA_TARGET, so you can increase the SGA in the future without restarting the database.

PGA_AGGREGATE_TARGET
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance.
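The headroom rules above (MEMORY_MAX_TARGET above MEMORY_TARGET, SGA_MAX_SIZE above SGA_TARGET) can be expressed as a small pre-flight check. This is a hypothetical helper for illustration; the parameter values shown are examples, not recommendations.

```python
# Hypothetical pre-flight check for the sizing rules described above:
# each *_MAX_* parameter should exceed its target so the target can be
# raised later without restarting the database. Values are examples only.

def check_memory_params(params):
    """Return a list of warnings for pairs set without growth headroom."""
    warnings = []
    pairs = [("MEMORY_TARGET", "MEMORY_MAX_TARGET"),
             ("SGA_TARGET", "SGA_MAX_SIZE")]
    for target, ceiling in pairs:
        if target in params and ceiling in params:
            if params[ceiling] <= params[target]:
                warnings.append(
                    f"{ceiling} should be set higher than {target} "
                    "to allow online growth")
    return warnings

# Example (GB): MEMORY_MAX_TARGET leaves no headroom, SGA pair is fine.
params = {"MEMORY_TARGET": 8, "MEMORY_MAX_TARGET": 8,
          "SGA_TARGET": 6, "SGA_MAX_SIZE": 10}
for w in check_memory_params(params):
    print("WARNING:", w)
```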

Load Data from a File into an Oracle Table with SQL*Loader


SQL*Loader is a client-side utility for loading data into the database from files.
You can load text or CSV files into the database from any client where the SQL*Loader engine is running.

SQL*Loader uses a control file, which defines the format of the data read from the datafile along with other parameters.
The control file defines the format and method in which data is loaded; you can also apply functions while inserting data and define the character set as you wish.
SQL*Loader is a powerful tool for reading files and loading them into the database.

Invoking SQL*Loader:

sqlldr username/password@server control=loader.ctl

Example control file:
The following example builds a control file that loads data from mydata.csv, a comma-separated file, into the emp1 table. The last line lists the columns into which the data is inserted.

load data
infile 'c:\mydata.csv'
into table emp1
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )

Data in mydata.csv:
20001,Rakesh,10000,10
20006,Shweta,20000,20
20008,Anil,20000,20

Note:
The LOAD DATA statement tells SQL*Loader that this is the beginning of a new data load.
INFILE specifies the name of a datafile containing the data that you want to load.
INTO TABLE enables you to identify tables, fields, and data types.
FIELDS TERMINATED BY "," is the symbol used to separate values in your CSV file.
OPTIONALLY ENCLOSED BY '"' allows CSV values to be enclosed in double quotes.
(column names) is the list of columns to be loaded. The order of this list comes from the CSV file and the column names come from the table.
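The interaction of FIELDS TERMINATED BY "," with OPTIONALLY ENCLOSED BY '"' behaves like standard CSV quoting, which can be demonstrated by analogy with Python's csv module (this is not SQL*Loader itself, just the same parsing rule):

```python
import csv
import io

# Demonstrates, by analogy, how FIELDS TERMINATED BY ","
# OPTIONALLY ENCLOSED BY '"' splits one record: commas separate
# fields, and enclosing quotes protect commas inside a field.

record = '"Jones, Joe",14,"Jan-12-2012 09:25:37 AM"\n'
row = next(csv.reader(io.StringIO(record), delimiter=",", quotechar='"'))
print(row)   # the comma inside "Jones, Joe" does not split the field
```

Without the enclosing quotes, "Jones, Joe" would be split into two fields and the load would misalign every column after it.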

Control file syntax explained:
The LOAD DATA statement tells SQL*Loader that this is the beginning of a new data load.
The INFILE clause specifies the name of a datafile containing the data that you want to load.
The BADFILE clause specifies the name of a file into which rejected records are placed.
The DISCARDFILE clause specifies the name of a file into which discarded records are placed.
The APPEND clause is one of the options you can use when loading data into a table that is not empty; INSERT is used when the table is empty.
The INTO TABLE clause enables you to identify tables, fields, and data types.
The WHEN clause specifies one or more field conditions.
The TRAILING NULLCOLS clause tells SQL*Loader to treat any relatively positioned columns that are not present in the record as null columns.
The FIELD LIST provides information about column formats in the table being loaded.

-- This is a sample control file
Options (SKIP = 1)
LOAD DATA
INFILE 'sample.dat'
BADFILE 'sample.bad'
DISCARDFILE 'sample.dsc'
APPEND
INTO TABLE emp
WHEN (57) = '.'
TRAILING NULLCOLS
(hiredate SYSDATE,
deptno POSITION(1:2) INTEGER EXTERNAL(2)
NULLIF deptno=BLANKS,
job POSITION(7:14) CHAR TERMINATED BY WHITESPACE
NULLIF job=BLANKS "UPPER(:job)",
mgr POSITION(28:31) INTEGER EXTERNAL
TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
ename POSITION(34:41) CHAR
TERMINATED BY WHITESPACE "UPPER(:ename)",
empno POSITION(45) INTEGER EXTERNAL
)

Example to Load CSV file into database table:

1. Place the file in D:\ folder.
Data.csv file:

NAME , BALANCE, START_DT
"Jones, Joe" , 14 , "Jan-12-2012 09:25:37 AM"
"Loyd, Lizy" , 187.26 , "Aug-03-2004 03:13:00 PM"
"Smith, Sam" , 298.5 , "Mar-27-1997 11:58:04 AM"
"Doyle, Deb" , 5.95 , "Nov-30-2010 08:42:21 PM"


2. Create a table in Oracle according to data in CSV file.

create table scott.test(name varchar2(50),amount number,start_date date) ;


3. Create a control file
The control file tells SQL*Loader all the details of the load:

--------------------------------------------------
-- SQL*Loader Basic Control File
--------------------------------------------------
options ( skip=1 )
load data
infile 'Data.csv'
truncate into table scott.test
fields terminated by ","
optionally enclosed by '"'
( name
, amount
, start_date DATE "Mon-DD-YYYY HH:MI:SS am"
)

Note: Details of the control file:
OPTIONS (SKIP=1) = Skip the header row of the CSV file. If there is no header row it would be (skip=0).
LOAD DATA = The command to start loading data.
INFILE 'Data.csv' = The name of your CSV data file.
TRUNCATE INTO TABLE = The schema and name of the Oracle table into which data is loaded. TRUNCATE specifies that the existing data in the table is erased prior to the load.
FIELDS TERMINATED BY "," = The symbol used to separate values in your CSV file.
OPTIONALLY ENCLOSED BY '"' = Allows CSV values to be enclosed in double quotes.
( ) = The list of columns to be loaded. The order of this list comes from the CSV file and the column names come from the table.


4. Run the SQLLDR utility to load the datafile into the table

@echo off
sqlldr 'scott/tiger@my_database' control='Control.ctl' log='Results.log'
pause


5. Check the table values

select * from scott.test;


Error SQL*Loader-466: Column EMPNO does not exist in table EMP1.

D:\>sqlldr scott control=loader.ctl
Password:
SQL*Loader: Release 11.2.0.2.0 - Production on Tue May 1 12:11:34 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
SQL*Loader-466: Column EMPNO does not exist in table EMP1.

Solution:
Make the column list in the control file (the last line in the example) match the table definition exactly; verify the definition with:
desc scott.emp1;

Error SQL*Loader-500: Unable to open file (control.ctl)

D:\>sqlldr scott control=control.ctl
Password:
SQL*Loader: Release 11.2.0.2.0 - Production on Tue May 1 12:11:20 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
SQL*Loader-500: Unable to open file (control.ctl)
SQL*Loader-553: file not found
SQL*Loader-509: System error: The system cannot find the file specified.

Solution:
Check the path of the control file: it must exist in the directory from which sqlldr is run, and the file name must be spelled correctly.

Error SQL*Loader-601: For INSERT option, table must be empty. Error on table EMP

D:\>sqlldr scott control=loader.ctl
Password:
SQL*Loader: Release 11.2.0.2.0 - Production on Tue May 1 12:19:28 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
SQL*Loader-601: For INSERT option, table must be empty. Error on table EMP

Solution:
Modify the control file: use APPEND if you want to add the data to the existing rows, or TRUNCATE to remove the old data before loading. Choose one and modify the control file accordingly:
append into table emp (adds the data to the existing table)
truncate into table emp (truncates the old data in the table, then loads the new data)

Handle the In-Doubt Transactions in Distributed Database


The following error messages inform applications when there are problems with distributed transactions:

ORA-02050: transaction ID rolled back,
some remote dbs may be in-doubt
ORA-02053: transaction ID committed,
some remote dbs may be in-doubt
ORA-02054: transaction ID in-doubt

Cases where an in-doubt transaction must be fixed manually:
1. The in-doubt transaction locks data that is required by other transactions. This situation is signaled by the ORA-01591 error message.
2. The in-doubt transaction prevents the extents of an undo segment from being used by other transactions. The first portion of the local transaction ID of an in-doubt distributed transaction corresponds to the ID of the undo segment, as listed by the data dictionary view DBA_2PC_PENDING.
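Since the first component of the local transaction ID identifies the undo segment, extracting it is a one-liner. The helper below is hypothetical, written only to illustrate the ID format described above.

```python
# Hypothetical helper: the local transaction ID has the form
# "undo_segment.slot.sequence", so the first component identifies
# the undo segment held by the in-doubt transaction.

def undo_segment_of(local_tran_id):
    """Return the undo segment number from a local transaction ID."""
    return int(local_tran_id.split(".")[0])

print(undo_segment_of("1.21.17"))   # -> 1
```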

Example of handling an in-doubt transaction
This example uses three servers: HQ, SALES, and WAREHOUSE.

The main task is to identify which server is the global coordinator, which is the commit point site, and which are database servers and clients in the transaction. This information helps determine whether the transaction was committed or should be rolled back.

Terms used in a distributed environment:
Client: a node that references information on another database node.
Database Server: a node that hosts a database from which a client requests data.
Local Coordinator: a node that must reference data on another node to complete its part of the transaction.
Global Coordinator: the node on which the transaction originates.
Commit Point Site: the node the system chooses to commit or roll back first; its outcome decides the whole transaction.

Steps to handle the in-doubt transaction:

1. The following error appears:

ORA-01591: lock held by in-doubt distributed transaction 1.21.17

2. Query DBA_2PC_PENDING
On the WAREHOUSE server, run the following command with the transaction ID from the error:

SELECT * FROM DBA_2PC_PENDING WHERE LOCAL_TRAN_ID = '1.21.17';

Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.21.17
GLOBAL_TRAN_ID         SALES.EXAMPLE.COM.55d1c563.1.93.29
STATE                  prepared
MIXED                  no
ADVICE
TRAN_COMMENT           Sales/New Order/Trans_type 10B
FAIL_TIME              31-MAY-91
FORCE_TIME
RETRY_TIME             31-MAY-91
OS_USER                SWILLIAMS
OS_TERMINAL            TWA139:
HOST                   system1
DB_USER                SWILLIAMS
COMMIT#


Note:
1. The local and global transaction IDs differ, so this node is not the global coordinator.
2. STATE is 'prepared': the node has prepared and is waiting for the coordinator to send a commit or rollback request.
3. The TRAN_COMMENT and ADVICE columns can help you decide whether to commit or roll back, but ADVICE is blank in this case.

3. To trace the session tree, query the DBA_2PC_NEIGHBORS view on each node.
On the WAREHOUSE server, run the following command. The IN_OUT and INTERFACE columns help identify the session tree and the commit point site.

SELECT * FROM DBA_2PC_NEIGHBORS WHERE LOCAL_TRAN_ID = '1.21.17' ORDER BY SESS#, IN_OUT;

Column Name             Value
---------------------- --------------------
LOCAL_TRAN_ID          1.21.17
IN_OUT                 in
DATABASE               SALES.EXAMPLE.COM
DBUSER_OWNER           SWILLIAMS
INTERFACE              N
DBID                   000003F4
SESS#                  1
BRANCH                 0100


Note:
IN_OUT column:
IN: your node is a server of another node. DATABASE lists the name of the client database that connected to your node, and DBUSER_OWNER lists the local account for the database link connection that corresponds to the in-doubt transaction.
OUT: your node is a client of other servers. DATABASE lists the name of the database link that connects to the remote node, and DBUSER_OWNER lists the owner of the database link for the in-doubt transaction.
The INTERFACE column tells whether the local node or a subordinate node is the commit point site; C means the commit point site.

4. Check the status of the pending transaction on the other nodes.
Run the following query on the SALES server:

SELECT * FROM DBA_2PC_PENDING WHERE GLOBAL_TRAN_ID = 'SALES.EXAMPLE.COM.55d1c563.1.93.29';

Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.93.29
GLOBAL_TRAN_ID         SALES.EXAMPLE.COM.55d1c563.1.93.29
STATE                  prepared
MIXED                  no
ADVICE
TRAN_COMMENT           Sales/New Order/Trans_type 10B
FAIL_TIME              31-MAY-91
FORCE_TIME
RETRY_TIME             31-MAY-91
OS_USER                SWILLIAMS
OS_TERMINAL            TWA139:
HOST                   system1
DB_USER                SWILLIAMS
COMMIT#


4.1 Query DBA_2PC_NEIGHBORS on the SALES server:

SELECT * FROM DBA_2PC_NEIGHBORS WHERE GLOBAL_TRAN_ID = 'SALES.EXAMPLE.COM.55d1c563.1.93.29' ORDER BY SESS#, IN_OUT;

Warehouse:
Column Name             Value
---------------------- ----------------------
LOCAL_TRAN_ID          1.93.29
IN_OUT                 OUT
DATABASE               WAREHOUSE.EXAMPLE.COM
DBUSER_OWNER           SWILLIAMS
INTERFACE              N
DBID                   55d1c563
SESS#                  1
BRANCH                 1

HQ: 
Column Name            Value
---------------------- -----------------
LOCAL_TRAN_ID          1.93.29
IN_OUT                 OUT
DATABASE               HQ.EXAMPLE.COM
DBUSER_OWNER           ALLEN
INTERFACE              C
DBID                   00000390
SESS#                  1
BRANCH                 1


Note:
1. The first query shows that SALES is the global coordinator because its local and global transaction IDs match.
2. The second query shows two outbound connections and no inbound connections, so SALES is not a server of another node.
3. HQ is the commit point site because its INTERFACE column contains C.

5. Check the status of the pending transaction at HQ:

SELECT * FROM DBA_2PC_PENDING@hq.example.com WHERE GLOBAL_TRAN_ID = 'SALES.EXAMPLE.COM.55d1c563.1.93.29';

Column Name            Value
---------------------- --------------------------------------
LOCAL_TRAN_ID          1.45.13
GLOBAL_TRAN_ID         SALES.EXAMPLE.COM.55d1c563.1.93.29
STATE                  COMMIT
MIXED                  NO
ACTION
TRAN_COMMENT           Sales/New Order/Trans_type 10B
FAIL_TIME              31-MAY-91
FORCE_TIME
RETRY_TIME             31-MAY-91
OS_USER                SWILLIAMS
OS_TERMINAL            TWA139:
HOST                   SYSTEM1
DB_USER                SWILLIAMS
COMMIT#                129314


Note:
The STATE column shows COMMIT.
The COMMIT# column shows the global commit number.

6. The STATE column shows that the commit point site performed the commit, which means the transaction must be committed on the other nodes as well.
Manually force the commit on the other nodes:

SQL> CONNECT SYS@sales.example.com AS SYSDBA
SQL> COMMIT FORCE 'SALES.EXAMPLE.COM.55d1c563.1.93.29';

SQL> CONNECT SYS@warehouse.example.com AS SYSDBA
SQL> COMMIT FORCE 'SALES.EXAMPLE.COM.55d1c563.1.93.29';


In-Doubt Transactions in Distributed Database


The two-phase commit mechanism ensures that all nodes either commit or perform a rollback together. A transaction becomes in-doubt if the two-phase commit mechanism fails. RECO process automatically resolves in-doubt transactions when the system, network, or software problem is resolved.
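The decision the two-phase commit protocol makes can be sketched as follows (an illustrative model, not Oracle's RECO implementation): commit only if every participant voted "prepared"; any failure to prepare forces a rollback everywhere.

```python
# Illustrative two-phase commit sketch (not Oracle's implementation).
# Phase 1: every node is asked to prepare. Phase 2: the coordinator
# commits only if all nodes prepared; otherwise all roll back. A node
# that prepared but never receives the phase-2 decision is exactly an
# "in-doubt" node: it holds locks and cannot decide alone.

def two_phase_commit(votes):
    """votes: dict of node -> True if the node prepared successfully."""
    decision = "commit" if all(votes.values()) else "rollback"
    return {node: decision for node in votes}

print(two_phase_commit({"sales": True, "warehouse": True, "hq": True}))
print(two_phase_commit({"sales": True, "warehouse": False, "hq": True}))
```

This is why manual resolution must follow the commit point site: once it has committed, the only consistent choice for the remaining prepared nodes is COMMIT FORCE.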

Manual Resolution of In-Doubt Transactions
You need to resolve an in-doubt transaction manually in the following cases:
- The in-doubt transaction holds locks on critical data or undo segments.
- The cause of the system, network, or software failure cannot be repaired quickly.

Viewing Information About Distributed Transactions

DBA_2PC_PENDING lists all in-doubt distributed transactions and can be used to determine the global commit number for a particular transaction ID. The view is empty until populated by an in-doubt transaction; after the transaction is resolved, the view is purged.

COL LOCAL_TRAN_ID FORMAT A13
COL GLOBAL_TRAN_ID FORMAT A30
COL STATE FORMAT A8
COL MIXED FORMAT A3
COL HOST FORMAT A10
COL COMMIT# FORMAT A10
SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT# FROM DBA_2PC_PENDING;

LOCAL_TRAN_ID GLOBAL_TRAN_ID  STATE  MIX  HOST     COMMIT#
------------- --------------  ------ ---  -------- ---------
1.15.870      1.15.870        commit no   dlsun183 115499


Output:
Local transaction 1.15.870 has been committed on this node (STATE is 'commit'), but it may still be pending on one or more other nodes.
The local and global IDs have the same value, so this node is the global coordinator.

Note: When the LOCAL_TRAN_ID and GLOBAL_TRAN_ID values are the same, the node is the global coordinator of the transaction.
LOCAL_TRAN_ID: local transaction identifier.
GLOBAL_TRAN_ID: global database identifier.
STATE:
- Collecting: the node is currently collecting information from other database servers before it can decide whether it can prepare.
- Prepared: the node has prepared and is holding locks on resources; it may or may not have acknowledged to the local coordinator.
- Committed: the node has committed the transaction, but other nodes may or may not have done so.
- Forced commit: an administrator manually forced the pending transaction to commit at the local node.
- Forced rollback: a pending transaction was forced to roll back at the local node.
MIXED: YES means part of the transaction was committed on one node and rolled back on another.
TRAN_COMMENT: transaction comment.
HOST: host name.
COMMIT#: global commit number for the committed transaction.

DBA_2PC_NEIGHBORS lists all incoming and outgoing in-doubt distributed transactions. It also indicates whether the local node is the commit point site in the transaction.
LOCAL_TRAN_ID: local transaction identifier.
IN_OUT: IN for incoming transactions; OUT for outgoing transactions.
DATABASE: for incoming transactions, the name of the client database; for outgoing transactions, the name of the database link used to access information on the remote server.
DBUSER_OWNER: for incoming transactions, the local account used to connect; for outgoing transactions, the owner of the database link.
INTERFACE: C is a commit message; N indicates either a prepared state or a read-only commit.

COL LOCAL_TRAN_ID FORMAT A13
COL IN_OUT FORMAT A6
COL DATABASE FORMAT A25
COL DBUSER_OWNER FORMAT A15
COL INTERFACE FORMAT A3
SELECT LOCAL_TRAN_ID, IN_OUT, DATABASE, DBUSER_OWNER, INTERFACE FROM DBA_2PC_NEIGHBORS;

LOCAL_TRAN_ID IN_OUT DATABASE                  DBUSER_OWNER    INT
------------- ------ ------------------------- --------------- ---
1.15.870      out     SALES.EXAMPLE.COM        SYS             C

For more information, see the example above of handling an in-doubt transaction manually.