Friday, March 31, 2017

Monitoring a MapR Secured Cluster



A monitoring infrastructure is one of the most important things a cluster needs: it lets you watch for abnormalities and proactively fix them before a small problem escalates into a major issue.

This blog assumes you have a secured MapR 5.2 cluster installed and running. Follow the steps below to set up the monitoring infrastructure for the cluster.

1) Verify that the root and mapr users have valid tickets; if not, create them.

[root@node10 ~]# maprlogin password
[Password for user 'root' at cluster 'Destination': ] 
MapR credentials of user 'root' for cluster 'Destination' are written to '/tmp/maprticket_0'

[mapr@node10 root]$ maprlogin password
[Password for user 'mapr' at cluster 'Destination': ] 
MapR credentials of user 'mapr' for cluster 'Destination' are written to '/tmp/maprticket_2000'
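
To confirm the tickets are valid, you can print them with maprlogin (the ticket file paths below are the defaults maprlogin wrote above for uid 0 and uid 2000 on my demo node):

[root@node10 ~]# maprlogin print -ticketfile /tmp/maprticket_0
[mapr@node10 root]$ maprlogin print -ticketfile /tmp/maprticket_2000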


2) Set up a MEP repo compatible with the MapR cluster version you installed.

[root@node10 ~]# cat /etc/yum.repos.d/mapr_core.repo 
[MapR_Core]
name = MapR Core Components
baseurl = http://package.mapr.com/releases/v5.2.0/redhat
gpgcheck = 1
enabled = 1
protected = 1

[maprecosystem]
name = MapR Technologies
baseurl = http://package.mapr.com/releases/MEP/MEP-2.0/redhat/
enabled=1
gpgcheck=0
protect=1
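
With the repo files in place, you can quickly confirm that yum sees both repos before installing anything (repo names match the files above; the exact output will vary by system):

yum repolist enabled | grep -i mapr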

3) Clean up the repos and install the packages below.

Metric Monitoring


yum clean all

i)  yum install mapr-collectd mapr-grafana mapr-opentsdb -y

Verify that the packages are installed correctly along with their dependencies.

Installed:

  mapr-collectd.x86_64 0:5.5.1.201612081135-1                 mapr-grafana.x86_64 0:3.1.1.201612081149-1                 mapr-opentsdb.noarch 0:2.2.1.201612081520-1                


Dependency Installed:
  mapr-asynchbase.noarch 0:1.7.0.201611162116-1                                                                                                                                      

Complete!
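
You can also confirm the packages actually landed on the node (versions will differ depending on your MEP release):

rpm -qa | grep -E 'mapr-(collectd|grafana|opentsdb|asynchbase)'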

Note: I am demoing with a single-node cluster, but in real scenarios you will have more nodes. collectd runs on all the nodes, while OpenTSDB/Grafana can run on one node that has very few other services installed.




Log Monitoring Architecture


ii) yum install mapr-fluentd mapr-elasticsearch mapr-kibana -y

Verify that the packages are installed correctly.

Installed:
  mapr-elasticsearch.noarch 0:2.3.3.201612081350-1               mapr-fluentd.x86_64 0:0.14.00.201612081135-1               mapr-kibana.x86_64 0:4.5.1.201612081226-1              

Complete!
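
As before, confirm the packages are present on the node (versions will differ with your MEP release):

rpm -qa | grep -E 'mapr-(fluentd|elasticsearch|kibana)'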

Note: Similarly, for log monitoring on a multi-node cluster, fluentd runs on all the nodes, while Elasticsearch/Kibana can run on one node (or a few nodes) that has very few other services installed.



4) Run configure.sh with the options below on every node so the cluster knows which nodes host Elasticsearch and OpenTSDB.

/opt/mapr/server/configure.sh -R -ES 10.10.70.110 -OT 10.10.70.110

Verify that the configuration completes smoothly and no errors are thrown.

Configuring Hadoop-2.7.0 at /opt/mapr/hadoop/hadoop-2.7.0
Done configuring Hadoop
Node setup configuration:  cldb collectd elasticsearch fileserver fluentd gateway grafana hbinternal kibana opentsdb webserver zookeeper
Log can be found at:  /opt/mapr/logs/configure.log
Running config for package collectd
Running config for package opentsdb
17/03/31 15:59:32 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
17/03/31 15:59:32 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Running config for package grafana
MAC verified OK
{"id":1,"message":"Datasource added"}Running config for package elasticsearch
Running config for package fluentd
Running config for package kibana
MAC verified OK
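
After configure.sh completes and Warden picks up the new services, you can confirm they are registered and running with maprcli (node10 is just my demo node; substitute your own hostnames):

maprcli node list -columns hostname,svc
maprcli service list -node node10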



Now, from the MCS, select the Grafana/OpenTSDB/Kibana view.


OPENTSDB :

http://<IPaddressOfOpenTSDBNode>:4242
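
A quick way to confirm OpenTSDB is up and already receiving metrics from collectd is to query its HTTP API (node10 is my demo node; the endpoints below are standard OpenTSDB 2.x REST calls):

curl http://node10:4242/api/version
curl "http://node10:4242/api/suggest?type=metrics&max=10"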


GRAFANA :



LINK: http://<IPaddressOfGrafanaNode>:3000
Username/passwd: admin/admin
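
You can also verify from the command line that Grafana is up and that configure.sh registered the OpenTSDB datasource (the default admin/admin credentials are assumed here; change them if you have already updated the password):

curl -u admin:admin http://node10:3000/api/datasources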

Below is a sample CLDB dashboard.

Below is a sample Node dashboard.

Note: If Grafana is not displaying details, it might be related to a known bug in older OpenTSDB packages, fixed in version 1.3.0 ( https://maprdrill.atlassian.net/browse/SPYG-806 ).

Below is a workaround to fix the issue (a scripted sketch of these steps follows the list).

On all OpenTSDB nodes:
1) Edit /opt/mapr/opentsdb/*/etc/opentsdb/opentsdb.conf
   Add: tsd.core.meta.enable_tsuid_tracking = true under the --------- CORE ---------- section.
2) Restart all OpenTSDB services.
3) cd /opt/mapr/opentsdb/*/bin
4) Run # ./tsdb <OpenTSDB pid> metasync
5) Restart Grafana and navigate back to the Grafana dashboards.
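
Here is a minimal scripted sketch of the same workaround, assuming the services are managed through Warden and node10 is the OpenTSDB/Grafana node (the setting is simply appended to the end of opentsdb.conf, which behaves the same as placing it under the CORE section):

# append the tracking setting to opentsdb.conf on each OpenTSDB node
echo "tsd.core.meta.enable_tsuid_tracking = true" | tee -a /opt/mapr/opentsdb/*/etc/opentsdb/opentsdb.conf

# restart OpenTSDB through Warden
maprcli node services -name opentsdb -action restart -nodes node10

# run the metasync from the OpenTSDB bin directory
cd /opt/mapr/opentsdb/*/bin
./tsdb <OpenTSDB pid> metasync

# restart Grafana and reload the dashboards
maprcli node services -name grafana -action restart -nodes node10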


KIBANA :



http://<IPaddressOfKibanaNode>:5601


  1. When the Kibana page loads, it displays the "Configure an index pattern" screen (Settings tab). Provide the following values:
     Note: The "Index contains time-based events" option is selected by default and should remain selected.
     Index name or pattern: mapr_monitoring-*
     Time-field: @timestamp
  2. Click Create.

The new index pattern is now created.
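
If the index pattern does not show any data, you can check directly that fluentd is shipping logs into Elasticsearch (9200 is the default port for the monitoring Elasticsearch instance; adjust if yours differs):

curl "http://node10:9200/_cat/indices?v" | grep mapr_monitoring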




Now click on the Discover tab; it will display the indexed logs and also give you the option to search for patterns.



