Finding text strings in Wireshark captures:
ref: https://www.cellstream.com/reference-reading/tipsandtricks/431-finding-text-strings-in-wireshark-captures
History of the IPs that have been assigned to your PC.
There are 2 commands that can list the IPs your PC obtained from DHCP:
1) sudo cat /var/log/syslog | grep -Ei 'dhcp' | grep ip_address
2) sudo journalctl | grep -Ei 'dhcp' | grep ip_address
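Alternatively, dhclient keeps its leases on disk (path is an assumption; it may differ if NetworkManager handles DHCP):
grep fixed-address /var/lib/dhcp/dhclient*.leases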
$ apt-cache madison calc
 calc | 2.12.7.2-4 | http://10.108.201.140/ubuntu/mirror/archive.ubuntu.com/ubuntu focal/universe amd64 Packages
 calc | 2.10.18-dfsg-2build1 | http://archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages
500 http://cm.archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages
ref:
https://linuxopsys.com/topics/install-specific-version-package-apt
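Once you know the available versions, installing a specific one is pkg=version (using the calc output above as the example):
sudo apt install calc=2.12.7.2-4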
ls /var/lib/apt/lists/
Look for files ending with "Packages", e.g.:
dl.google.com_linux_chrome_deb_dists_stable_main_binary-amd64_Packages
This file lists the names of the packages available in that repo.
ref:
https://tecadmin.net/list-all-packages-available-in-a-repository-on-ubuntu/
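To dump only the package names from those Packages files, a quick sketch:
grep -h "^Package:" /var/lib/apt/lists/*_Packages | sort -u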
1) start with
ceph health detail
2) OSD_FULL
Ceph prevents writes to a full OSD. By default the 'full' ratio is 0.95. As a temporary workaround, raise it slightly (e.g. to 0.96):
ceph osd set-full-ratio 0.96
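Once enough space has been freed, the ratio should presumably be set back to the default mentioned above:
ceph osd set-full-ratio 0.95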
ps: to get more info
1- ceph osd dump | grep full_ratio
2- ceph df
(ref https://docs.ceph.com/en/quincy/rados/operations/health-checks/)
3) POOL_TOO_FEW_PGS
list pools
ceph osd lspools
To allow the cluster to automatically adjust the number of PGs:
ceph osd pool set <pool-name> pg_autoscale_mode on
or set the PG count manually:
ceph osd pool set <pool-name> pg_num <new-pg-num>
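To see what the autoscaler recommends for each pool (a quick sanity check):
ceph osd pool autoscale-status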
4) BLUESTORE_NO_PER_POOL_OMAP
systemctl stop ceph-osd@123
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-123
systemctl start ceph-osd@123
Prepare the new disk as a Physical Volume (PV):
pvcreate /dev/sdj
Add the new disk to the existing Volume Group (VG):
vgextend pve /dev/sdj
Extend the size of local-lvm (LVM-thin):
lvextend -L+100G pve/data
Extend the size of the pool metadata for local-lvm (LVM-thin):
lvresize --poolmetadatasize +1GB pve/data
Proxmox LVM
1- pvs
2- pvdisplay
3- vgs
4- vgdisplay
5- lvs
6- lvdisplay
=======================================================================
pvs
  PV         VG                                          Fmt  Attr PSize   PFree
  /dev/sdh   sdh_thinPool                                lvm2 a--    1.09t 120.00m
  /dev/sdi   ceph-c52a84e1-dbb7-45ce-8f2b-7b164b113313   lvm2 a--    1.09t       0
  /dev/sdj   pve                                         lvm2 a--    1.09t  <1.01t
  /dev/sdk3  pve                                         lvm2 a--  223.00g       0
pvdisplay
--- Physical volume ---
PV Name /dev/sdh
VG Name sdh_thinPool
PV Size 1.09 TiB / not usable <1.59 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 286160
Free PE 30
Allocated PE 286130
PV UUID NHORh3-YaHf-CssJ-IqqT-j3FM-vcpI-elSI4i
=============================================================================
vgs
  VG                                          #PV #LV #SN Attr   VSize   VFree
  ceph-c8f585b1-835b-4584-9b44-687543a5ef12    1   1   0  wz--n-   1.09t       0
  pve                                          2   3   0  wz--n-  <1.31t  <1.01t
  sdh_thinPool                                 1   5   0  wz--n-   1.09t 120.00m
vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 13
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size <1.31 TiB
PE Size 4.00 MiB
Total PE 343249
Alloc PE / Size 78594 / <307.01 GiB
Free PE / Size 264655 / <1.01 TiB
VG UUID ut9tSq-VL7P-abb4-YmEi-zAoV-w9DP-9Ci7l8
--- Volume group ---
VG Name sdh_thinPool
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 12
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.09 TiB
PE Size 4.00 MiB
Total PE 286160
Alloc PE / Size 286130 / 1.09 TiB
Free PE / Size 30 / 120.00 MiB
VG UUID b2ZreT-EXdF-S8yQ-6DUA-1YnI-hZgh-AN2Dgz
==============================================================
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 1.11t 0.42 0.20
root pve -wi-ao---- 155.75g
swap pve -wi-ao---- 8.00g
vm-123-disk-0 pve Vwi-a-tz-- 20.00g data 24.21
lvdisplay
--- Logical volume ---
LV Name data
VG Name pve
LV UUID IUc7pT-Df9y-7pQt-vndT-zloU-YJxi-2IrXsh
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2020-06-02 07:51:03 +0800
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size 140.39 GiB
Allocated pool data 0.00%
Allocated metadata 1.14%
Current LE 35940
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:14
--- Logical volume ---
LV Name sdh_thinPool
VG Name sdh_thinPool
LV UUID UNmZHv-dwut-4tB4-5fQ1-Tn45-KuPx-pCa4Hv
LV Write Access read/write (activated read only)
LV Creation host, time prox1, 2022-07-07 10:45:34 +0800
LV Pool metadata sdh_thinPool_tmeta
LV Pool data sdh_thinPool_tdata
LV Status available
# open 0
LV Size <1.07 TiB
Allocated pool data 0.73%
Allocated metadata 0.21%
Current LE 280406
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:16
=====================================================================
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdk 8:160 0 223.5G 0 disk
|-sdk1 8:161 0 1007K 0 part
|-sdk2 8:162 0 512M 0 part /boot/efi
`-sdk3 8:163 0 223G 0 part
|-pve-swap 253:7 0 8G 0 lvm [SWAP]
|-pve-root 253:8 0 155.8G 0 lvm /
|-pve-data_tmeta 253:9 0 11.4G 0 lvm
| `-pve-data-tpool 253:11 0 1.1T 0 lvm
| |-pve-data 253:12 0 1.1T 1 lvm
| `-pve-vm--123--disk--0 253:13 0 20G 0 lvm
`-pve-data_tdata 253:10 0 1.1T 0 lvm
`-pve-data-tpool 253:11 0 1.1T 0 lvm
|-pve-data 253:12 0 1.1T 1 lvm
`-pve-vm--123--disk--0 253:13 0 20G 0 lvm
Route apt through a SOCKS proxy, e.g. in /etc/apt/apt.conf.d/proxy.conf:
Acquire::http::Proxy "socks5h://127.0.0.1:1080";
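To test the proxy without editing any config file, the same option can be passed one-off (a sketch, same proxy as above):
sudo apt-get -o Acquire::http::Proxy="socks5h://127.0.0.1:1080" update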
# create a SOCKS proxy via SSH dynamic port forwarding
ssh -D 9999 <remote host URI>
# use pip through the proxy created above (requires pysocks)
python3 -m pip install <package name> --proxy socks5://localhost:9999
ref: https://stackoverflow.com/questions/22915705/how-to-use-pip-with-socks-proxy
# Notes:
To ignore certificate verification problems with pip, you can add --trusted-host:
pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org <package_name>
doc['field'] (doc values) is enabled by default on all fields EXCEPT text fields.
doc['field.keyword']
- populated not at ingest time, but a bit later
- by default only populated if the value length is < 256 characters (ignore_above: 256)
Certificate:
openssl s_client -connect <host>:<port> < /dev/null 2>/dev/null
Fingerprint of certificate:
openssl s_client -connect <host>:<port> < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
In elasticsearch.yml (TLS for the HTTP layer):
xpack.security.http.ssl.key: certs/privkey.pem
xpack.security.http.ssl.certificate: certs/fullchain.pem
xpack.security.http.ssl.certificate_authorities: [ "certs/chain.pem" ]
https://github.com/CollectionBuilder/collectionbuilder-sa_draft/issues/37
Setting up the initial master node:
https://towardsdev.com/elasticsearch-8-express-installation-guide-6065d89141d8
For the other nodes, copy the certificates from the master node; otherwise they cannot join the cluster.
##
There are http.ssl and transport.ssl settings.
http.ssl is for clients (Kibana, Logstash, or your own code) connecting to Elasticsearch, on port 9200.
transport.ssl is for the nodes in the cluster to communicate among themselves, on port 9300.
Elastic suggests using a different CA and certificates for HTTP than for transport.
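For reference, a sketch of the matching transport-layer settings in elasticsearch.yml (the certs/transport* file names are assumptions):
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/transport.key
xpack.security.transport.ssl.certificate: certs/transport.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/transport-ca.crt" ]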
1) Kibana connects to Elasticsearch as the kibana_system user
2) to reset its password when running in Docker:
docker exec -it container_name /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
GET /_cat/nodes - get list of nodes
GET / - get the version of the node
GET /_cluster/health?pretty
- get cluster status: green/yellow/red
- number of nodes
- number of primary shards
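A quick way to run these from the shell (assumes HTTPS on port 9200 with a self-signed cert and the elastic user; adjust as needed):
curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"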
https://www.elastic.co/virtual-events/unlock-your-soc-stop-threats-with-limitless-xdr
Detect - alert-rules
Video minute 15:20
Cases:
minute : 27
External Incident Management System: (eg: JIRA)
minute: 28:31
In docker-compose.yml:
services:
  elasticsearch:
    environment:
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD
In the .env file:
ELASTIC_PASSWORD=changeme
https://discuss.elastic.co/t/set-password-and-user-with-docker-compose/225075
Elasticsearch on Docker using docker-compose
in docker-compose.yml:
services:
  es31:
    image: docker.elastic.co/elasticsearch/elasticsearch:${VERSION}
in the .env file (in the same dir):
VERSION=7.7.0
When a new release exists:
1- just change the version in the .env file
2- stop the container
3- start it again (docker-compose up es31)
Elasticsearch will handle the whole upgrade process. If you have a cluster with multiple nodes, repeat the steps above on all the non-master nodes; the master node should be the last one.
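A sketch of the upgrade steps above as shell commands (the new version number is only an example):
sed -i 's/^VERSION=.*/VERSION=7.8.0/' .env
docker-compose stop es31
docker-compose up -d es31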
Build your own container virtual network
*marvelous
https://iximiuz.com/en/posts/container-networking-is-simple/
[string] encode() >> [bytes]
[bytes] hex() >> [hex_string]
[hex_string] fromhex() >> [bytes]
[bytes] decode() >> [string]
1) bytes and encode()
k = bytes("4d", "utf-8")
is equivalent to
j = "4d".encode()
>> k = j = b"4d"
2) hex()
k = bytes("a4d", "utf-8")
> k = b'a4d'
m = k.hex()
> m = '613464'
3) fromhex() and decode()
n = bytes.fromhex(m)
> n = b'a4d'
p = n.decode("utf-8")
> p = 'a4d'
UTF Notes:
encoded
j = "4d".encode("utf-8")
>> j = b'4d'
j = "4d".encode("utf-16")
>> j = b'\xff\xfe4\x00d\x00'