Monday, November 28, 2016

Quick Network Test for MapR Cluster.



  • Below is a script that picks one set of nodes as clients and uses the rpctest utility to open a single connection from each client to another node (from the server set) and push data as fast as possible. In the script below I list both nodes of the cluster in the server set as well as the client set, so that after the first iteration the roles are reversed and data is pushed in the opposite direction. This is done for every node in the cluster, for both sending and receiving.
  • Note: if nodes have multiple interfaces, each interface needs to be tested separately.
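To see which interfaces a node has (each one should then appear in the arrays and be tested in turn), something like the following can be used; this is a Linux-only sketch, and ip(8) may not be installed everywhere.

```shell
# List every network device on this node with its first IPv4 address;
# the rpctest runs should then be repeated once per interface IP.
for dev in /sys/class/net/*; do
  name=${dev##*/}
  # Fall back to a placeholder if ip(8) is missing or no IPv4 is assigned.
  addr=$(ip -o -4 addr show "$name" 2>/dev/null | awk '{print $4; exit}')
  echo "$name ${addr:-no-ipv4}"
done
```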


In the script below, add all hostnames/IPs of the cluster to both arrays (half1 and half2).
_______________________________________________________________________________________

# Define array of server hosts (half of all hosts in cluster)
half1=(10.10.70.109 10.10.70.110)

for node in "${half1[@]}"; do
  #ssh -n $node /root/iperf -s -i3&  # iperf alternative test, requires iperf binary pushed out to all nodes like rpctest
  ssh -n $node /opt/mapr/server/tools/rpctest -server &
done
echo Servers have been launched
sleep 10 # let the servers set up

# Define 2nd array of client hosts (other half of all hosts in cluster)
half2=(10.10.70.109 10.10.70.110)

for clientnode in "${half2[@]}"; do
  for servernode in "${half1[@]}"; do
    echo "Launching RPC test:$clientnode-$servernode"
    ssh -n $clientnode "rm -f rpctest-$clientnode-to-$servernode.log"
    ssh -n $clientnode "/opt/mapr/server/tools/rpctest -client 50000 $servernode > rpctest-$clientnode-to-$servernode.log" & # Concurrent mode, if using sequential mode remove "&"
  done
done
echo Clients have been launched

wait # Wait for all background clients to finish; comment out for sequential mode
sleep 20

echo "The network bandwidth (MB/s) between nodes, i.e. clientnode-to-servernode (baseline 1GbE=125MB/s, 10GbE=1250MB/s)"
tmp=${half2[@]}
clush -w ${tmp// /,} 'grep -i -e ^Rate -e error rpctest*log' 

tmp=${half1[@]}
clush -w ${tmp// /,} pkill rpctest #Kill the servers
_______________________________________________________________________________________

For the script to run you need passwordless SSH set up and the clush parallel shell to retrieve the results. Please refer to the blogs below for setup details.
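The clush node lists in the script are built from the bash arrays with a parameter-expansion substitution: `${tmp// /,}` replaces every space with a comma. A minimal illustration:

```shell
# Flatten a bash array into the comma-separated node list clush expects.
half=(10.10.70.109 10.10.70.110)
tmp=${half[@]}      # joined with spaces: "10.10.70.109 10.10.70.110"
csv=${tmp// /,}     # spaces replaced by commas
echo "$csv"
```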


Executing the script logs the details below, the RPC test between each client node and server node. It is OK to ignore results where client and server are on the same node; the main value comes from client/server pairs sitting across different racks or data centers.

[root@node10 ~]# ./networktest.sh 
Servers have been launched
Launching RPC test:10.10.70.109-10.10.70.109
Launching RPC test:10.10.70.109-10.10.70.110
Launching RPC test:10.10.70.110-10.10.70.109
Launching RPC test:10.10.70.110-10.10.70.110
Clients have been launched

The network bandwidth (MB/s) between nodes, i.e. clientnode-to-servernode (baseline 1GbE=125MB/s, 10GbE=1250MB/s)
10.10.70.109: rpctest-10.10.70.109-to-10.10.70.109.log:Rate: 977.29 MB/s, time: 53.6491 sec, #rpcs 800031, rpcs/sec 14912.3
10.10.70.109: rpctest-10.10.70.109-to-10.10.70.110.log:Rate: 763.52 MB/s, time: 68.6703 sec, #rpcs 800031, rpcs/sec 11650.3
10.10.70.110: rpctest-10.10.70.110-to-10.10.70.109.log:Rate: 828.08 MB/s, time: 63.3162 sec, #rpcs 800031, rpcs/sec 12635.5
10.10.70.110: rpctest-10.10.70.110-to-10.10.70.110.log:Rate: 788.81 MB/s, time: 66.468 sec, #rpcs 800031, rpcs/sec 12036.3

The few lines above give us a good idea of the rate we can expect between any two nodes in the cluster and help weed out slacker nodes that can become a bottleneck.
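With the Rate lines collected, slow pairs can be flagged mechanically. A sketch, reusing two of the sample lines above and an arbitrary 800 MB/s cutoff (pick a threshold appropriate to your NICs):

```shell
# Sample rpctest result lines (copied from the run above).
cat > /tmp/rates.out <<'EOF'
10.10.70.109: rpctest-10.10.70.109-to-10.10.70.110.log:Rate: 763.52 MB/s, time: 68.6703 sec
10.10.70.110: rpctest-10.10.70.110-to-10.10.70.109.log:Rate: 828.08 MB/s, time: 63.3162 sec
EOF
# Print any node pair whose measured rate is below the threshold.
awk -F'Rate: ' '{split($2, a, " "); if (a[1] + 0 < 800) print "SLOW:", $1, a[1], "MB/s"}' /tmp/rates.out
```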



Saturday, November 26, 2016

Interacting with Hadoop Cluster ( File Client) Part 1




We interact with a Hadoop cluster for various reasons: loading data into the cluster, reading data from it, running jobs, etc. Behind the scenes, all of these operations go through FileClient. So what is FileClient? FileClient is the interface an application or client uses to communicate with the server (the Hadoop cluster): when an application/client needs to communicate with the cluster, it links to the library provided by the FileClient package to open an interface between the client and the servers.

Architecture:

FileClient accepts requests from Java or C applications and plugs each request into the appropriate layer. When a user (client) tries to read a file in the Hadoop cluster, the file operation reaches the C layer, is forwarded to the common layer, and is then passed to the RPC layer, which makes remote calls to process the request and returns the result to the user once it has the requested information.






Scope of this doc (FileClient): FileClient is a huge topic; the scope of this post is to understand which file services FileClient serves and the components that let it do so quickly and efficiently.
   When an application/client needs to communicate with the cluster, it links to the library provided by the FileClient package to initiate an open interface. Let's understand why this open interface (libMapRClient) is needed between clients and server for file operations.






The figure above shows the flow of control when a client wants to identify/list a file in the cluster. There are 3 steps:

Client: takes a file name as input for any operation on it (read, write, open, etc.). The client understands only file names.

libMapRClient: this library is mainly used for mapping a file to its FID and vice versa.

Server: the server identifies files only by FID (file identifier).

We know the file server identifies files only by FID, but what is an FID? What is it comprised of?
FID is short for file identifier, the number by which the fileserver identifies different files. It is made up of three parts separated by ".": container ID, inode number, and a unique number (uniquifier).
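The three parts can be split out of an FID string with plain shell; the FID below is the example used in this post.

```shell
# Split an FID (container.inode.uniquifier) into its three components.
fid="2049.51.1180684"
oldIFS=$IFS; IFS=.
set -- $fid                 # positional parameters: $1=$cid $2=$inode $3=$uniq
IFS=$oldIFS
cid=$1; inode=$2; uniq=$3
echo "container=$cid inode=$inode uniquifier=$uniq"
```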

The command below shows how the file name /Test is interpreted as the FID "2049.51.1180684" for its primary chunk.





Conversely, if I would like to know the file name for the FID "2049.51.1180684", we can run the commands below to get that info.




When the above commands were run, FileClient followed a number of steps to get the FID for the file and vice versa. To see detailed log messages listing every step, we can run the same command with debug enabled, as below, to trace everything FileClient does until it gets the corresponding FID for /Test:

hadoop mfs -Dfs.mapr.trace=debug  -ls /Test


Explaining every step is out of scope for this doc, but I plan to write future posts listing and explaining all the steps FileClient takes when writing (loading) files to the cluster or reading files from it.

FileClient keeps 3 types of cache in memory to make operations quicker during the life of the process. They are:

1) CID cache: every container ID (CID) is cached, along with the nodes on which it resides and the node on which the master copy of the container resides.

In the log lines below from the earlier debug command, we see Cidcache create a new entry for container 2049, along with the IP "172.16.122.159" on which this CID resides.

2015-05-28 15:08:56,7661 DEBUG Cidcache fs/client/fileclient/cc/cidcache.cc:353 Thread: 12608 Created new entry for cid 2049
2015-05-28 15:08:56,7661 DEBUG Cidcache fs/client/fileclient/cc/cidcache.cc:811 Thread: 12608 Setting srcCluster my.cluster.com, Cid 2049
2015-05-28 15:08:56,7662 DEBUG Cidcache fs/client/fileclient/cc/cidcache.cc:119 Thread: 12608 PopulateEntry: For CID 2049 received host IP 172.16.122.159


2) FID cache: every FID, with its corresponding file, is cached by FileClient so it can be looked up quickly later.

The log lines below from the earlier debug command confirm that FID "2049.573.1051710" is added to the Fidcache for the /Test file.

2015-05-28 15:08:56,7677 DEBUG Client fs/client/fileclient/cc/client.cc:2630 Thread: 12608 PathWalk: Adding /Test Fid 2049.573.1051710 to Fidcache
2015-05-28 15:08:56,7677 DEBUG Client fs/client/fileclient/cc/client.cc:2634 Thread: 12608 PathWalk: WalkDone File /Test, resp fid 2049.573.1051710
2015-05-28 15:08:56,7677 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:1049 Thread: 12608  -- Exit JNI getattr -- /Test

A few lines later, the log confirms that when the file /Test is opened, FileClient does a lookup in the FID cache and gets a cache hit.

2015-05-28 15:08:56,8119 DEBUG Client fs/client/fileclient/cc/client.cc:1321 Thread: 12608 >Open: file /Test
2015-05-28 15:08:56,8119 DEBUG Client fs/client/fileclient/cc/client.cc:2514 Thread: 12608 Lookupfid : start = /Test, end = (nil)
2015-05-28 15:08:56,8119 DEBUG Client fs/client/fileclient/cc/client.cc:2564 Thread: 12608 Path /Test fid:2049.573.1051710 found in fidcache
2015-05-28 15:08:56,8120 DEBUG Inode fs/client/fileclient/cc/inode.cc:225 Thread: 12608 itab:fill  2049.573.1051710 cache hit, copied fattrs from 0x7f993c940ab0

3) FID map: stores the fidmap for a file, i.e. the mapping from the file to the FIDs of its chunks.
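When reading a captured debug log, the cache activity can be isolated with a simple grep; the sample file below holds two of the log lines quoted above, standing in for a full (much larger) trace.

```shell
# Two cache-related lines copied from the debug output earlier in this post.
cat > /tmp/fc-debug.out <<'EOF'
2015-05-28 15:08:56,7661 DEBUG Cidcache fs/client/fileclient/cc/cidcache.cc:353 Thread: 12608 Created new entry for cid 2049
2015-05-28 15:08:56,7677 DEBUG Client fs/client/fileclient/cc/client.cc:2630 Thread: 12608 PathWalk: Adding /Test Fid 2049.573.1051710 to Fidcache
EOF
# Show only CID/FID cache activity from the log.
grep -i -E 'cidcache|fidcache' /tmp/fc-debug.out
```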


Friday, November 25, 2016

File Client Debug logging Part 2


To access MapR's filesystem, MapR provides a client implementation known as FileClient. This client is necessary for all access to MapR-FS and is used by the hadoop command line, ecosystem components, and YARN applications, among many others.

You may find that access to MapR-FS is not behaving as expected and you need further details about client behavior to debug the issue, for example when hadoop commands hang or fail with I/O errors or stale file handle errors.

The method for enabling debug logging within MapR's FileClient differs depending on the client or application you are using to access MapR-FS. This blog covers different methods for getting debug-level logging from FileClient when accessing MapR-FS.

1.  Using the hadoop command line
If the access issue is observed using the hadoop command line, debug logging can be enabled directly on the command. For example, the original command is:

# hadoop fs -ls /

To enable debug logging, add '-Dfs.mapr.trace=debug' to the command syntax and re-run the command. Ex:


hadoop fs -Dfs.mapr.trace=debug -ls /

2016-11-25 16:50:15,1428 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:537 Thread: 4544  -- Enter JNI OpenClient -- 
2016-11-25 16:50:15,1482 DEBUG Client fs/client/fileclient/cc/client.cc:6571 Thread: 4544 shmemsize: 2560 raSize: 0
2016-11-25 16:50:15,1550 DEBUG Client fs/client/fileclient/cc/client.cc:6588 Thread: 4544 connect timeout value : 0
2016-11-25 16:50:15,1550 DEBUG Client fs/client/fileclient/cc/client.cc:6602 Thread: 4544 User buffersize = 1024
2016-11-25 16:50:15,1550 DEBUG Client fs/client/fileclient/cc/client.cc:6610 Thread: 4544 Group buffersize = 1024
2016-11-25 16:50:15,1553 DEBUG Client fs/client/fileclient/cc/client.cc:6659 Thread: 4544 PutBuffer memory threshold = 33554432 MB, flush interval = 3 secs, bufferSize =  131072 bytes
2016-11-25 16:50:15,1554 DEBUG Client fs/client/fileclient/cc/client.cc:6670 Thread: 4544 BulkLoader queueSz= 0MB flags=0
2016-11-25 16:50:15,1555 DEBUG Client fs/client/fileclient/cc/client.cc:1040 Thread: 4544 InitCreds: number of groups = 1
2016-11-25 16:50:15,1555 DEBUG Client fs/client/fileclient/cc/client.cc:1078 Thread: 4544 InitCreds: default user ID = 0
2016-11-25 16:50:15,1555 DEBUG Client fs/client/fileclient/cc/client.cc:1084 Thread: 4544 Added gid 0
2016-11-25 16:50:15,1708 DEBUG Client fs/client/fileclient/cc/client.cc:4487 Thread: 4544 Readdirplus: name = hbase
2016-11-25 16:50:15,1708 DEBUG Client fs/client/fileclient/cc/client.cc:4458 Thread: 4544 Readdirplus: Entry name "var" node 2049.35.131184 cookie 0xffffffffffffffff
2016-11-25 16:50:15,1708 DEBUG Client fs/client/fileclient/cc/client.cc:3464 Thread: 4544 mode = 493
2016-11-25 16:50:15,1708 DEBUG Client fs/client/fileclient/cc/client.cc:3469 Thread: 4544 uid = 2000
2016-11-25 16:50:15,1708 DEBUG Client fs/client/fileclient/cc/client.cc:3473 Thread: 4544 gid = 2000
2016-11-25 16:50:15,1711 DEBUG ApiCommon fs/client/fileclient/cc/api_common.cc:175 Thread: 4544 IdLookup: Found user: mapr, id: 2000
2016-11-25 16:50:15,1711 DEBUG ApiCommon fs/client/fileclient/cc/api_common.cc:175 Thread:Found 7 items
drwxr-xr-x   - mapr mapr          1 2016-11-14 14:34 /ameya
drwxr-xr-x   - mapr mapr          1 2016-11-10 10:51 /apps
drwxr-xr-x   - mapr mapr          0 2016-11-01 20:33 /hbase
drwxr-xr-x   - mapr mapr          0 2016-11-01 20:35 /opt
drwxrwxrwx   - mapr mapr          4 2016-11-25 13:40 /tmp
drwxr-xr-x   - mapr mapr          2 2016-11-14 14:33 /user
drwxr-xr-x   - mapr mapr          1 2016-11-01 20:34 /var
 4544 IdLookup: Found group: mapr, id: 2000
2016-11-25 16:50:15,1712 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:439 Thread: 4544 Created JNIFileStatus obj


Given the operation being performed, the output with this option added is likely to be very verbose. It can be redirected to a file for later review using the following command, which writes the full debug log to the file '/tmp/debug.out'.

# hadoop fs -Dfs.mapr.trace=debug -ls / >> /tmp/debug.out 2>&1

In the same way, FileClient debug logging can be turned on when running a job from the command line:

 yarn jar /opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0-mapr-1607.jar pi -Dfs.mapr.trace=DEBUG 1 2

Number of Maps  = 1
Samples per Map = 2
2016-11-25 17:03:12,9225 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:537 Thread: 16349  -- Enter JNI OpenClient -- 
2016-11-25 17:03:12,9234 DEBUG Client fs/client/fileclient/cc/client.cc:6571 Thread: 16349 shmemsize: 2560 raSize: 0
2016-11-25 17:03:12,9304 DEBUG Client fs/client/fileclient/cc/client.cc:6588 Thread: 16349 connect timeout value : 0

2016-11-25 17:03:12,9304 DEBUG Client fs/client/fileclient/cc/client.cc:6602 Thread: 16349 User buffersize = 1024
2016-11-25 17:03:12,9304 DEBUG Client fs/client/fileclient/cc/client.cc:6610 Thread: 16349 Group buffersize = 1024
2016-11-25 17:03:12,9309 DEBUG Client fs/client/fileclient/cc/client.cc:6659 Thread: 16349 PutBuffer memory threshold = 33554432 MB, flush interval = 3 secs, bufferSize =  131072 bytes
2016-11-25 17:03:12,9309 DEBUG Client fs/client/fileclient/cc/client.cc:6670 Thread: 16349 BulkLoader queueSz= 0MB flags=0
2016-11-25 17:03:12,9310 DEBUG Client fs/client/fileclient/cc/client.cc:1040 Thread: 16349 InitCreds: number of groups = 2
2016-11-25 17:03:12,9310 DEBUG Client fs/client/fileclient/cc/client.cc:1078 Thread: 16349 InitCreds: default user ID = 2000
2016-11-25 17:03:12,9311 DEBUG Client fs/client/fileclient/cc/client.cc:1084 Thread: 16349 Added gid 2000
2016-11-25 17:03:13,6828 DEBUG Client fs/client/fileclient/cc/client.cc:2162 Thread: 16349 SetThreadLocalDefaultUserInfo: setting default user ID of 2000
2016-11-25 17:03:13,6828 DEBUG Client fs/client/fileclient/cc/client.cc:2153 Thread: 16349 SetThreadLocalUserInfo: set user ID to 2000 in thread memory at 0x7fda70bbb5e0
2016-11-25 17:03:13,6828 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:649 Thread: 16349 parent = /var/mapr/cluster/yarn/rm/staging/mapr/.staging/job_14789077359, end = /job.jar
2016-11-25 17:03:13,6828 DEBUG Client fs/client/fileclient/cc/client.cc:2819 Thread: 16349 >TraverseAndCreateDirs: parentName /var/mapr/cluster/yarn/rm/staging/mapr/.staging/job_14789077359, child (nil)
2016-11-25 17:03:13,6828 DEBUG Client fs/client/fileclient/cc/client.cc:2834 Thread: 16349 TraverseAndCreateDirs: Path /var/mapr/cluster/yarn/rm/staging/mapr/.staging/job_14789077359 fid:2114.40.393848 found in fidcache
2016-11-25 17:03:14,1723 DEBUG ApiCommon fs/client/fileclient/cc/api_common.cc:175 Thread: 16349 IdLookup: Found user: mapr, id: 2000
2016-11-25 17:03:14,1723 DEBUG ApiCommon fs/client/fileclient/cc/api_common.cc:175 Thread: 16349 IdLookup: Found group: mapr, id: 2000
2016-11-25 17:03:14,1723 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:439 Thread: 16349 Created JNIFileStatus obj
16/11/25 17:03:14 INFO impl.YarnClientImpl: Submitted application application_1478907735945_0006
16/11/25 17:03:14 INFO mapreduce.Job: The url to track the job: http://node10.maprlab.local:8088/proxy/application_1478907735945_0006/
16/11/25 17:03:14 INFO mapreduce.Job: Running job: job_1478907735945_0006
16/11/25 17:03:22 INFO mapreduce.Job: Job job_1478907735945_0006 running in uber mode : false
16/11/25 17:03:22 INFO mapreduce.Job:  map 0% reduce 0%
16/11/25 17:03:28 INFO mapreduce.Job:  map 100% reduce 0%
16/11/25 17:03:33 INFO mapreduce.Job:  map 100% reduce 100%
16/11/25 17:03:33 INFO mapreduce.Job: Job job_1478907735945_0006 completed successfully
16/11/25 17:03:33 INFO mapreduce.Job: Counters: 46
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=193533
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
MAPRFS: Number of bytes read=336
MAPRFS: Number of bytes written=303
MAPRFS: Number of read operations=43
MAPRFS: Number of large read operations=0
MAPRFS: Number of write operations=59
Job Counters 
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=3281
Total time spent by all reduces in occupied slots (ms)=9231
Total time spent by all map tasks (ms)=3281
Total time spent by all reduce tasks (ms)=3077
Total vcore-seconds taken by all map tasks=3281
Total vcore-seconds taken by all reduce tasks=3077
Total megabyte-seconds taken by all map tasks=3359744
Total megabyte-seconds taken by all reduce tasks=9452544
DISK_MILLIS_MAPS=1641
DISK_MILLIS_REDUCES=4092
Map-Reduce Framework
Map input records=1
Map output records=2
Map output bytes=18
Map output materialized bytes=0
Input split bytes=134
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=24
Reduce input records=2
Reduce output records=0
Spilled Records=4
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=153
CPU time spent (ms)=1200
Physical memory (bytes) snapshot=890372096
Virtual memory (bytes) snapshot=5562601472
Total committed heap usage (bytes)=874512384
Shuffle Errors
IO_ERROR=0
File Input Format Counters 
Bytes Read=118
File Output Format Counters 
Bytes Written=97
Job Finished in 20.913 seconds
9  -- Exit createJNIFileStatus -- 
2016-11-25 17:03:33,9136 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:1015 Thread: 16349  -- Enter JNI getattr -- /user/mapr/QuasiMonteCarlo_1480122192058_2027324396/out/reduce-
2016-11-25 17:03:33,9136 DEBUG ApiCommon fs/client/fileclient/cc/api_common.cc:1802 Thread: 16349 ApiCommonSetThreadUser: Obtaining the object info for UserInfo object 0x7fda7934b000
2016-11-25 17:03:33,9136 DEBUG Client fs/client/fileclient/cc/client.cc:2162 Thread: 16349 SetThreadLocalDefaultUserInfo: setting default user ID of 2000
2016-11-25 17:03:33,9136 DEBUG Client fs/client/fileclient/cc/client.cc:2153 Thread: 16349 SetThreadLocalUserInfo: set user ID to 2000 in thread memory at 0x7fda70bbb5e0
2016-11-25 17:03:33,9380 DEBUG Inode fs/client/fileclient/cc/inode.cc:641 Thread: 16349 itab:inval >2059.32.131264 back to lr
Estimated value of Pi is 4.00000000000000000000
u 0x7fda70eaf4b0
2016-11-25 17:03:33,9381 DEBUG JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:973 Thread: 16349  -- Exit JNI remove -- /user/mapr/QuasiMonteCarlo_1480122192058_2027324396 

2.  Using core-site.xml

If the issue is observed when accessing MapR-FS from map-reduce tasks, or from any application where the '-Dfs.mapr.trace=debug' option cannot be passed directly, debug logging can be enabled in core-site.xml. This method can also be used for the command line, so that '-Dfs.mapr.trace=debug' is not needed on every invocation. Depending on your Hadoop version, core-site.xml lives under /opt/mapr/hadoop/hadoop-0.20.2/conf/ or /opt/mapr/hadoop/hadoop-*/etc/hadoop/. Add the following property to core-site.xml and save the changes:


<property>
<name>fs.mapr.trace</name>
<value>debug</value>
</property>

Note that making this change in core-site.xml will affect every process that accesses MapR-FS through FileClient and will result in verbose debug output printed to each process's stderr.
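A quick way to confirm the property landed is to grep the file; the path below is a stand-in sample file written for illustration, not the real core-site.xml.

```shell
# Sample core-site.xml fragment standing in for the real file.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.mapr.trace</name>
    <value>debug</value>
  </property>
</configuration>
EOF
# Print the value configured for fs.mapr.trace.
grep -A1 '<name>fs.mapr.trace</name>' /tmp/core-site.xml | grep -o '<value>.*</value>'
```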

3.  Programmatically within application code

If the issue is observed when accessing MapR-FS from user application code, the 'fs.mapr.trace' configuration can be set directly in the application's configuration code. Ex:


Configuration conf = new Configuration();
conf.set("fs.mapr.trace", "debug");
This will enable FileClient debug logging when the application code is run and calls are made to access MapR-FS.

Thursday, November 24, 2016

Verify consistency of Volume



You might see I/O errors or other unexpected errors while accessing files in the filesystem. To validate the consistency of the volume, we can perform the steps below.

1) Create a snapshot of the volume.

 maprcli volume snapshot create -snapshotname usersnapshot -volume users

2) List the snapshots to get the snapshotid of the volume whose consistency we are checking.

 maprcli volume snapshot list
cumulativeReclaimSizeMB  creationtime                  ownername  snapshotid  volumeSnapshotAces  snapshotname   volumeid  volumename  ownertype  volumepath  

0                        Thu Nov 24 23:32:50 IST 2016  mapr       256000051   ...                 usersnapshot   95706557  users       1          /user       
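If the snapshotid needs to be picked out in a script, it can be parsed from the listing. The sketch below reuses a trimmed version of the output above and relies on snapshotid appearing immediately before snapshotname in this layout, which is an assumption about this particular output format.

```shell
# Sample `maprcli volume snapshot list` output (trimmed from the run above).
cat > /tmp/snaplist.out <<'EOF'
cumulativeReclaimSizeMB  creationtime                  ownername  snapshotid  snapshotname   volumeid  volumename
0                        Thu Nov 24 23:32:50 IST 2016  mapr       256000051   usersnapshot   95706557  users
EOF
# Print the field immediately preceding the snapshot name.
awk -v name=usersnapshot 'NR > 1 { for (i = 2; i <= NF; i++) if ($i == name) { print $(i-1); exit } }' /tmp/snaplist.out
```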

3) gfsck (a read-only command) runs in 3 phases to check the consistency of the volume. Below are the phases and what is done in each.

i) Get the container list from CLDB for the volume
ii) Get inodes from all containers of the volume
iii) Check the container map, fid map, and snap chain list for consistency

Finally, a full report is printed with a list of inconsistencies, if any.

Example Run.

 /opt/mapr/bin/gfsck snapshotid=256000051

parseint snapshotid=256000051
Starting GlobalFsck:
  clear-mode = false
  debug-mode = false
  dbcheck-mode = false
  repair-mode = false
  assume-yes-mode = false
  cluster = mapr5.1
  rw-volume-name = null
  snapshot-name = null
  snapshot-id = 256000051
  user-id = 0
  group-id = 0

  get snapshot properties ...
  get volume properties ...

  starting phase one (get containers) for volume usersnapshot(256000051) ...
    got snapshot containers map
  done phase one

  starting phase two (get inodes) for volume usersnapshot(256000051) ...
    cid translation list is verified for 256000061
    got container inode lists
  done phase two

  starting phase three (get fidmaps & tabletmaps) for volume usersnapshot(256000051) ...
    got fidmap lists
    got tabletmap lists
    Starting DeferMapCheck..
    completed DeferMapCheck
  done phase three

  === Start of GlobalFsck Report ===

  file-fidmap-filelet union --
    no errors

  table-tabletmap-tablet union --
    empty

  Dangling DB pointers --
    none


  orphan directories --
    none

  orphan kvstores --
    none

  orphan files --
    none

  orphan fidmaps --
    none

  orphan tables --
    none

  orphan tabletmaps --
    none

  orphan dbkvstores --
    none

  orphan dbfiles --
    none

  orphan dbinodes --
    none

  containers that need repair --
    none

  incomplete snapshots that need to be deleted --
    none

  user statistics --
    containers = 6
    directories = 25
    kvstores = 0
    xattrKvStore = 0
    xattrInline = 0
    files = 1002
    fidmaps = 0
    filelets = 0
    tables = 0
    tabletmaps = 0
    schemas = 0
    tablets = 0
    segmaps = 0
    spillmaps = 0
    overflowfiles = 0
    bucketfiles = 0
    spillfiles = 0
    defermaps = 0

  === End of GlobalFsck Report ===

GlobalFsck completed successfully (515 ms); Result: verify succeeded
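The report above can also be scanned mechanically: every section header ends in "--", and a healthy section reads none, no errors, or empty on the next line. A sketch over a made-up report fragment (one section is deliberately unhealthy for illustration):

```shell
# A made-up gfsck report fragment; "containers that need repair" is unhealthy.
cat > /tmp/gfsck.out <<'EOF'
  orphan files --
    none
  containers that need repair --
    2049
EOF
# Print any section whose first value line is not a healthy marker.
awk '/--$/ { sec = $0; getline val; gsub(/^ +/, "", val);
             if (val != "none" && val != "no errors" && val != "empty")
               print sec, "=>", val }' /tmp/gfsck.out
```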