Troubleshooting Cheatsheet

Use this to create the ousigner keystore in the local environment :

keytool -importkeystore -srckeystore ousigner.p12 -destkeystore lmg-eis-prv.jks -srcstoretype PKCS12 -deststoretype jks -srckeypass npciupi -destalias 1 -srcalias 1 -destkeypass npciupi
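To verify the converted keystore afterwards ( a quick check; the alias and password are the ones used in the command above ) :

keytool -list -v -keystore lmg-eis-prv.jks -storepass npciupi -alias 1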

To Execute SBC Standalone Code :

java -cp . Standalone <*.*.*.*> <port> ./REQ_1603105.txt

Certificate configuration :

Download the SSL certificate and import it into cacerts. All the private certs of FIG and the public certs of EIS need to be configured; this has to be redone whenever the JDK is replaced.

echo -n | openssl s_client -connect *..co.in:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | tee "***.cer"

Cacerts Location :

( Note : All are symbolic links )

/etc/alternatives/java_sdk_11_openjdk/lib/security

/etc/pki/java/cacerts

/etc/pki/ca-trust/extracted/java/cacerts

keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias pnb-epa-cert -file ***-ssl.crt
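To confirm the cert landed in the truststore ( the alias is the one used above ) :

keytool -list -keystore /etc/pki/java/cacerts -storepass changeit -alias pnb-epa-cert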

To set Java Home path

export JAVA_HOME=/etc/alternatives/java_sdk_11_openjdk/

export PATH=$JAVA_HOME/bin:$PATH
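A quick sanity check that the JDK on the PATH is the intended one :

echo $JAVA_HOME

java -version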

How to start Kafka

First, start ZooKeeper :

nohup ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties &

Then the Kafka server :

nohup ./bin/kafka-server-start.sh -daemon ./config/server.properties &

( Note : While migrating from one server to another, some config property values need to be changed according to the environment. )

For testing purposes, send a message.

Note : The consumer and the producer below should be run in two different tabs.

./bin/kafka-console-consumer.sh --bootstrap-server *.*.*.175:9092 --topic TOPIC_NAME

./bin/kafka-console-producer.sh --broker-list *.*.*.175:9092 --topic TOPIC_NAME

--> enter some message
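If the topic does not exist yet, it can be listed or created first; a rough example ( on older Kafka versions --zookeeper is used instead of --bootstrap-server ) :

./bin/kafka-topics.sh --bootstrap-server *.*.*.175:9092 --list

./bin/kafka-topics.sh --bootstrap-server *.*.*.175:9092 --create --topic TOPIC_NAME --partitions 1 --replication-factor 1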

TCP Protocol

Layer 1: Physical Layer

Note : DOWN against the eth0 interface in the link-state output means that Layer 1 isn't coming up.
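A minimal sketch for checking and bringing up the link ( eth0 is only an example interface name ) :

ip -br link show ( look for DOWN against the interface )

ip link set eth0 up ( attempt to bring the link up )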

Layer 2: Data Link Layer

Description :

The data link layer is responsible for local network connectivity; essentially, the communication of frames between hosts on the same Layer 2 domain (commonly called a local area network). The most relevant Layer 2 protocol for most sysadmins is the Address Resolution Protocol (ARP), which maps Layer 3 IP addresses to Layer 2 Ethernet MAC addresses. When a host tries to contact another host on its local network (such as the default gateway), it likely has the other host’s IP address, but it doesn’t know the other host’s MAC address. ARP solves this issue and figures out the MAC address for us.

Note : Check that the gateway's MAC address is populated.

Steps to troubleshoot Layer 2 issues :

- ip neighbor show
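If a stale or wrong ARP entry is suspected, it can be flushed and re-learned ( eth0 and the gateway IP are placeholders ) :

ip neighbor flush dev eth0

ping -c 1 <gateway-ip> ( repopulates the ARP table )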

Layer 3: Network /Internet Layer

Description :

Layer 3 involves working with IP addresses, which should be familiar to any sysadmin. IP addressing provides hosts with a way to reach other hosts that are outside of their local network (though we often use them on local networks as well)

Note : Example output for an interface that is UP with addresses assigned : ens192 UP *.*.57.149/23 fe80::250:56ff:feaf:a4b4/64

Steps to troubleshoot Layer 3 issues :

- ping
- traceroute
- ip -br address show
- ip route show
- nslookup
- /etc/hosts
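A rough checking sequence for Layer 3 ( the gateway, remote host and hostname are placeholders ) :

ip -br address show ( interface is UP and has an address )

ip route show ( a default route exists )

ping -c 4 <gateway-ip> ( the default gateway is reachable )

traceroute <remote-ip> ( where does the path break ? )

nslookup <hostname> ( DNS resolves; also check /etc/hosts )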

Layer 4: Transport Layer

Description :

The first thing that you may want to do is see what ports are listening on the localhost. The result can be useful if you can’t connect to a particular service on the machine, such as a web or SSH server. Another common issue occurs when a daemon or service won’t start because of something else listening on a port

Steps to troubleshoot Layer 4 issues :

- ss -tunlp4
- telnet
- nc
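A rough checking sequence for Layer 4 ( host and port are placeholders ) :

ss -tunlp4 ( which ports are listening locally and which process owns them )

nc -zv <host> <port> ( is the remote port reachable ? )

telnet <host> <port> ( alternative connectivity check )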

To capture TCP Dump

/usr/sbin/tcpdump -i any host *.*.29.191 and port 443 -w tcpdump_5pm.pcap
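To read the capture back quickly ( same file name as above ) :

/usr/sbin/tcpdump -nn -r tcpdump_5pm.pcap | head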

To install memcached on Linux

Reference : https://www.tutorialspoint.com/memcached/memcached_environment.htm

memcached -p 6005 -U 6005 -u user -d
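To confirm memcached is listening on that port, the text-protocol stats command can be sent ( assuming it runs on the local host ) :

printf "stats\r\nquit\r\n" | nc 127.0.0.1 6005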

Commands to check utilisation

vmstat -n -t <delay> <count>

vmstat -S M 3 ( to check memory in MB )

Important fields to check utilisation

free --> for memory

si --> for swap in

so --> for swap out

To check CPU utilisation

mpstat -P ALL

mpstat <delay> <count>

sar -u 5

To check RAM and Core

To check RAM --> vmstat -s

To check core processor count --> getconf _NPROCESSORS_ONLN

To check Memory --> grep MemTotal /proc/meminfo

To check Hardware ID

/opt/ANSI_ISO_SDK_Linux/bin/gethwid

/home/vaibhavumale/ANSI_ISO_SDK_Linux/bin/gethwid

Task 1 : Partition is in unusable state

https://docs.oracle.com/database/121/SUTIL/GUID-C44AADF7-777D-4847-A5ED-75E36B40D0EB.htm#SUTIL1305

https://dba.stackexchange.com/questions/3754/ora-01502-index-or-partition-of-such-index-is-in-usable-state-problem

Task 2 :

Reference :

https://mkyong.com/jdbc/hikaripool-1-connection-is-not-available-request-timed-out-after-30002ms/

https://www.baeldung.com/hikaricp

Types of Volume

The type of volume we use basically tells how data is stored outside the container. A volume type is essentially a type of driver.

Reference : https://kubernetes.io/docs/concepts/storage/volumes/

1. emptyDir : It creates an empty directory that exists as long as the pod is alive, and the container can write to it. This gives us confidence that the data will survive a container restart.

Steps to do that ( see the sketch below ) :

- Volume creation : we have to specify the name of the volume and the type of volume. For the types of volume, refer to the link mentioned above.
- Volume binding : mountPath - this is the container-internal path where the volume should be mounted. It depends on the application.
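A minimal sketch of an emptyDir volume applied from the shell ( the pod name demo-app, the image and the mountPath /app/data are only assumptions ) :

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: nginx                  # placeholder image
      volumeMounts:
        - name: demo-volume
          mountPath: /app/data      # container-internal path
  volumes:
    - name: demo-volume
      emptyDir: {}
EOF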

2. hostPath :

Reference : https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Privileged Container : https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

Note :

- It takes a path from the host machine and all the pods share the same volume.
- The drawback is that it is restricted to the same Node, but it is better than emptyDir.
- It is basically a shared path from the host to the container.
- type is an important field of hostPath; check the link above for reference. For example, DirectoryOrCreate tells Kubernetes that the directory is present and, if not, to create one.
- It will only work on minikube ( a one-Worker-Node environment ).

Reference for Windows Host : https://stackoverflow.com/questions/71018631/kubernetes-on-docker-for-windows-persistent-volume-with-hostpath-gives-operatio

- Only pods on the same Node have access to the hostPath volume; pods on other Nodes can't access it.
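A hedged hostPath variant of the same pod sketch ( the host path /data/demo is an assumption; type DirectoryOrCreate creates the directory if it is missing ) :

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: nginx
      volumeMounts:
        - name: demo-volume
          mountPath: /app/data
  volumes:
    - name: demo-volume
      hostPath:
        path: /data/demo
        type: DirectoryOrCreate
EOF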

3. CSI Volume Type ( Container Storage Interface )

Note :

- It allows us to use AWS elastic cloud storage.
- All the cloud providers offer a driver solution for it.

4. NFS ( Network File System )

Note :

- It does not have a built-in AWS system.

5. Persistent Volume

Note :
- Volumes are destroyed when a pod is removed ( hostPath ).
- Sometimes we require Pod- and Node-independent volumes, and we also don't want data to get lost if we scale up and down.
- Persistent Volumes solve the above issues… they will persist.
- A PVClaim is basically built to be Pod- and Node-independent, so the PVClaim connects to the standalone PV to request access. That way we can write to it as well.
- They don't store data on the node, as they are independent of it.
- Next we will see hostPath using a PVClaim.

6. HostPath using PV

Note : It will create a PV that is detached from the Pod and the Worker Node.

Reference : https://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity

Note :

- capacity describes the type of storage and its notation.
- A metadata sub-tag is basically a key-value pair.
- hostPath doesn't have the ReadOnlyMany & ReadWriteMany accessMode types.
- In short, a total of 2 YAMLs need to be created : one for the PV and the other for the PVClaim.
- The PV needs to be used through the PVClaim; after that, all the pods have to be configured to use the PV ( see the sketch below ).
- Kubernetes gives lots of flexibility : we can claim a PV not just by the PV name but also by resource.
- The resources key is important; it has requests.
- The connection to the POD is made from the PVClaim.
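A minimal sketch of the 2 YAMLs mentioned above, applied in one go : a hostPath-backed PV plus a PVClaim ( all names, the 1Gi size and the path are assumptions ) :

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce              # hostPath does not support ReadOnlyMany / ReadWriteMany
  hostPath:
    path: /data/demo
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # the resources / requests key mentioned above
EOF

The pod then mounts the claim instead of a hostPath : under volumes, use persistentVolumeClaim with claimName: demo-pvc ( on clusters with a default StorageClass, storageClassName may need to be set so the claim binds to this PV rather than a dynamically provisioned one ).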

7. NFS Storage Class

Reference : https://kubernetes.io/docs/concepts/storage/storage-classes/#nfs

To start the services

nohup ./bin/zookeeper-server-start.sh ./config/zookeeper.properties &

nohup ./bin/kafka-server-start.sh -daemon ./config/server.properties &

nohup ./bin/kafka-server-start.sh ./config/userlimits.properties &

nohup ./bin/kafka-server-start.sh ./config/customerlimits.properties &

To stop the services

nohup ./bin/zookeeper-server-stop.sh ./config/zookeeper.properties &

nohup ./bin/kafka-server-stop.sh -daemon ./config/server.properties &

nohup ./bin/kafka-server-stop.sh ./configimits.properties &

nohup ./bin/kafka-server-stop.sh ./config/limits.properties &

Steps to start Limits

Thanks for reading :)