高級打雜的每一天
Tuesday, March 23, 2021
How to disable weak SSH algorithms
On the server side:
1. vi /etc/ssh/sshd_config
2. Add the following lines:
Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1
3. systemctl restart sshd
On the client side:
1. vi /etc/ssh/ssh_config
2. Add the following lines:
Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1
3. No service restart is needed; the ssh client reads /etc/ssh/ssh_config on each new connection.
To verify the server settings:
1. sshd -T | grep macs
2. sshd -T | grep kexalgorithms
3. sshd -T | grep ciphers
To verify the client setting:
1. ssh -Q kex
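As a quick sanity check, you can also confirm from a client that a removed algorithm is rejected, and list everything your OpenSSH build supports (the host and user below are placeholders):
# A cipher you removed should now be rejected by the server,
# typically with a "no matching cipher found" error.
ssh -o Ciphers=3des-cbc user@server.example.com true
# List every algorithm this OpenSSH build supports, to compare
# against the Ciphers/MACs/KexAlgorithms lines above.
ssh -Q cipher
ssh -Q mac
ssh -Q kex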
Thursday, January 28, 2021
How to export Windows 10 stored Wi-Fi passwords
1. Open command prompt in Admin mode
2. netsh wlan show profile
3. netsh wlan export profile folder=c:\ key=clear
4. All Wi-Fi profiles are exported in XML format to C:\
5. Open them one by one and you can see the Wi-Fi password there
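Alternatively, you can read a single profile's password straight from the command prompt; the profile name below is a placeholder:
rem Show one profile with its key in clear text and keep only the password line.
netsh wlan show profile name="YourWifiName" key=clear | findstr /C:"Key Content"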
Friday, September 04, 2020
How to create a .pem File for SSL Certificate Installations (DigiCert)
Creating a .pem File for SSL Certificate Installations
.pem SSL Creation Instructions
SSL .pem files (concatenated certificate container files) are frequently required for certificate installations when multiple certificates are being imported as one file.
This article contains multiple sets of instructions that walk through various .pem file creation scenarios.
Creating a .pem with the Entire SSL Certificate Trust Chain
Log into your DigiCert Management Console and download your Intermediate (DigiCertCA.crt), Root (TrustedRoot.crt), and Primary Certificates (your_domain_name.crt).
Open a text editor (such as WordPad) and paste the entire body of each certificate into one text file in the following order:
The Primary Certificate - your_domain_name.crt
The Intermediate Certificate - DigiCertCA.crt
The Root Certificate - TrustedRoot.crt
Make sure to include the beginning and end tags on each certificate. The result should look like this:
-----BEGIN CERTIFICATE-----
(Your Primary SSL certificate: your_domain_name.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Intermediate certificate: DigiCertCA.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Root certificate: TrustedRoot.crt)
-----END CERTIFICATE-----
Save the combined file as your_domain_name.pem. The .pem file is now ready to use.
Creating a .pem with the Server and Intermediate Certificates
Log into your DigiCert Management Console and download your Intermediate (DigiCertCA.crt) and Primary Certificates (your_domain_name.crt).
Open a text editor (such as WordPad) and paste the entire body of each certificate into one text file in the following order:
The Primary Certificate - your_domain_name.crt
The Intermediate Certificate - DigiCertCA.crt
Make sure to include the beginning and end tags on each certificate. The result should look like this:
-----BEGIN CERTIFICATE-----
(Your Primary SSL certificate: your_domain_name.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Intermediate certificate: DigiCertCA.crt)
-----END CERTIFICATE-----
Save the combined file as your_domain_name.pem. The .pem file is now ready to use.
Creating a .pem with the Private Key and Entire Trust Chain
Log into your DigiCert Management Console and download your Intermediate (DigiCertCA.crt) and Primary Certificates (your_domain_name.crt).
Open a text editor (such as WordPad) and paste the entire body of each certificate into one text file in the following order:
The Private Key - your_domain_name.key
The Primary Certificate - your_domain_name.crt
The Intermediate Certificate - DigiCertCA.crt
The Root Certificate - TrustedRoot.crt
Make sure to include the beginning and end tags on each certificate. The result should look like this:
-----BEGIN RSA PRIVATE KEY-----
(Your Private Key: your_domain_name.key)
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
(Your Primary SSL certificate: your_domain_name.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Intermediate certificate: DigiCertCA.crt)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Your Root certificate: TrustedRoot.crt)
-----END CERTIFICATE-----
Save the combined file as your_domain_name.pem. The .pem file is now ready to use.
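If you prefer the shell to a text editor, the same bundle can be built with cat; this is a minimal sketch using the file names from the examples above:
# Build the .pem in the order primary -> intermediate -> root.
cat your_domain_name.crt DigiCertCA.crt TrustedRoot.crt > your_domain_name.pem
# Spot-check: print the subject and validity dates of the first
# (primary) certificate in the bundle.
openssl x509 -in your_domain_name.pem -noout -subject -dates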
Monday, August 10, 2020
Syslog-ng debugging by logging to a text file with formatting
1. Write a new file destination:
destination d_file_debug {
    file(
        "/tmp/file_location/file_name"
        template("text1: ${prs.source} ; text2 : ${prs.natip} \n")
    );
};
2. Update the log statement and add the new destination at the point where you suspect the problem is, as sketched below.
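A minimal sketch of that log statement, assuming an existing source s_net, a parser p_prs that sets ${prs.source} and ${prs.natip}, and an original destination d_original (all three names are placeholders for whatever your configuration uses):
log {
    source(s_net);
    parser(p_prs);
    destination(d_file_debug);   # temporary debug tap
    destination(d_original);     # the destination you already had
};
Run syslog-ng -s to check the configuration syntax before restarting the service.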
Thursday, July 23, 2020
Password protecting your site with an .htaccess file
Overview
This article explains how to password protect your directory via SSH
by creating an .htaccess and .htpasswd file. The following steps are
covered in this article.
- Creating the .htpasswd file
- Creating the .htaccess file
- Code to protect a WordPress subdirectory
- Force SSL (HTTPS) on the login prompt
Using the panel to password protect your site
The easiest way to password protect your site is to use the tool in the DreamHost panel. Navigate to the Htaccess/WebDAV page. You can then set up password protection there.
No access to your .htaccess and .htpasswd files
However, please note that if you use the panel option, the .htaccess and .htpasswd files will be owned by the server. This means you will not be able to manually edit either of these files if you need to. Additionally, these instructions will overwrite any existing .htaccess file. Make sure to backup your existing .htaccess file before beginning these steps.
If you only need to password protect your site and will need access to your .htaccess and .htpasswd file at any time in the future, you should use the instructions in this article instead to manually create those files.
Creating the .htpasswd file
- Log into your server via SSH.
- Create an .htpasswd file in the directory you wish to password protect using the htpasswd utility. For the first user, say user1, run the following:
[server]$ htpasswd -c /home/username/example.com/.htpasswd user1
- Enter the password for the user. This creates a password for a user
named 'user1'. The code in your .htpasswd file will show the encrypted
password like this:
user1:$apr1$bkS4zPQl$SyGLA9oP75L5uM5GHpe9A2
- Run it again (without the -c option) for any other users you wish to allow access to your directory.
- Set the permissions on this file to 644.
[server]$ chmod 644 .htpasswd
Creating the .htaccess file
Next, create an .htaccess file using the 'nano' editor:
Make sure to add this .htaccess file in the directory you wish to
password protect. For example, if you are password protecting your
entire site, it would go in your site's main directory:
- example.com
- example.com/members
[server]$ nano .htaccess
Code examples to add to the .htaccess file
Protect an entire directory
This example password protects an entire website directory. Make sure to change the AuthUserFile path to your actual file path, using your own username and domain name.
#Protect Directory
AuthName "Dialog prompt"
AuthType Basic
AuthUserFile /home/username/example.com/.htpasswd
Require valid-user
Protect a single file
This example password protects a single file. Wrap the directives in a Files block naming the file to protect (login.php below is only an example filename):
#Protect single file
<Files "login.php">
AuthName "Dialog prompt"
AuthType Basic
AuthUserFile /home/username/example.com/.htpasswd
Require valid-user
</Files>
Protect multiple files
This example protects multiple files. Use a FilesMatch block with a pattern matching the files to protect (the filenames below are only examples):
#Protect multiple files
<FilesMatch "^(login|admin)\.php$">
AuthName "Dialog prompt"
AuthType Basic
AuthUserFile /home/username/example.com/.htpasswd
Require valid-user
</FilesMatch>
Code to protect a WordPress subdirectory
Due to how WordPress routes all page requests, attempting to access a password protected subdirectory will throw a 404 Not Found error. To resolve this, you must add an extra line to the .htaccess file to reference ErrorDocument. This example protects a subdirectory named 'members'.
ErrorDocument 401 default
#Protect Directory
AuthName "Dialog prompt"
AuthType Basic
AuthUserFile /home/username/example.com/members/.htpasswd
Require valid-user
Force SSL (HTTPS) on the login prompt
By default, the login prompt you see is not encrypted. This means
your password will be sent as plain text over http. In order to encrypt
this login, you must add an SSL certificate to your domain. Once added, add the code below to force SSL when logging in.
This method prevents submission of an .htaccess password prompt on an unencrypted connection. If you wish to ensure that your server is only serving documents over an encrypted SSL channel, then you must use the SSLRequireSSL directive with the +StrictRequire Option enabled:
Step 1 — Adding code to your .htaccess file
Make sure the URL you enter next to SSLRequire is your site's base URL. Do not include 'www' in front of the URL if you're forcing 'www' to be removed in your panel.
If you're securing a subdirectory such as 'example.com/blog', this URL would still be 'example.com'.
Additionally, you can use any file you like for your 403 document. Below it is shown as 'error_redirect.php'. Change this to your chosen file.
SSLOptions +StrictRequire
SSLRequireSSL
SSLRequire %{HTTP_HOST} eq "example.com"
ErrorDocument 403 /error_redirect.php
If you're only protecting a subdirectory
If you only want to protect a single subdirectory and not the whole site, specify the subdirectory in your .htaccess file as shown in the following code:
#Protect Directory
AuthName "Dialog prompt"
AuthType Basic
AuthUserFile /home/example_username/example.com/blog/.htpasswd
Require valid-user
SSLOptions +StrictRequire
SSLRequireSSL
SSLRequire %{HTTP_HOST} eq "www.example.com"
ErrorDocument 403 /blog/error_redirect.php
AuthType none
If your site is on a server running Ubuntu 14 (Trusty), make sure to change the ErrorDocument line to the full URL path. For example:
ErrorDocument 403 https://example.com/blog/error_redirect.php
Step 2 — Add code to your error_redirect.php file
Now that your .htaccess will redirect to your error page, you must put some code into this error page to correctly redirect back to your secure login. Add the following PHP code.
Issue with renewing a 'Let's Encrypt' certificate
The code may cause a 'Let's Encrypt' certificate to not renew properly. If you have added a 'Let's Encrypt' certificate to your domain, make sure to disable the code in your .htaccess file when your certificate is about to renew. Once renewed, you can re-enable it.
Recover a crashed EC2 instance by attaching its volume to a new instance
1. Attach the old volume to the new instance in the AWS console.
2. In the new instance, mount the old volume on a folder, as sketched below.
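A minimal sketch of step 2, assuming the attached volume shows up as /dev/xvdf1 (run lsblk to find the real device name on your instance):
# Identify the newly attached device, create a mount point, and mount it.
lsblk
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue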
If you cannot mount your XFS partition, get the classic "wrong fs type, bad superblock" error, and see a message like this in the kernel logs (dmesg):
XFS: Filesystem sdb7 has duplicate UUID - can't mount
you can still mount the filesystem with the nouuid option as below:
mount -o nouuid /dev/sdb7 disk-7
But you have to provide the nouuid option on every mount. For a permanent fix, generate a new UUID for this partition with the xfs_admin utility:
xfs_admin -U generate /dev/sdb7
Clearing log and setting UUID
writing all SBs
new UUID = 01fbb5f2-1ee0-4cce-94fc-024efb3cd3a4
After that, you can mount this XFS partition normally.
Wednesday, July 15, 2020
How to send a message to a Slack channel
curl -X POST \
  -H 'Content-type: application/json' \
  --data '{"text":"Allow me to reintroduce myself!"}' \
  YOUR_WEBHOOK_URL
curl -X POST \
  --silent \
  --data-urlencode "payload={\"text\": \" Hello $(sudo -u username /home/hello/tryme.sh)\"}" \
  YOUR_WEBHOOK_URL
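For reuse in scripts, the first call can be wrapped in a small function; the webhook URL below is a placeholder:
#!/bin/bash
# Minimal notifier sketch; set WEBHOOK_URL to your Slack incoming webhook.
WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"

notify_slack() {
    local text="$1"
    curl --silent -X POST \
         -H 'Content-type: application/json' \
         --data "{\"text\": \"${text}\"}" \
         "${WEBHOOK_URL}"
}

notify_slack "backup finished on $(hostname) at $(date)"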
Wednesday, May 27, 2020
How to install a Sectigo wildcard certificate on Nginx
https://sectigo.com/resource-library/install-certificates-nginx-webserver
https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA01N000000zFKz
(How do I make my own bundle file from CRT files?)
https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA03l00000117LT
(Certificate Chain Diagram)
1. Refer to the above link to create the ca-bundle.crt
#cat SectigoRSAOrganizationValidationSecureServerCA.crt USERTrustRSACertificationAuthority.crt USERTrustRSAAAACertificateServerice.crt > domain.ca-bundle
2. Create the server.crt
#cat STAR_domain.crt domain.ca-bundle > server.crt
3. Prepare the server.key
#cp wildcard.domain.key server.key
4. Replace /etc/nginx/ssl/server.crt and server.key with the two files above
5. Restart the nginx server
#systemctl restart nginx
Verify the certificate expiration date.
server.crt format: 1. domain cert, 2. RSA cert, 3. root cert, 4. cross-sign cert
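A quick way to check the result from the shell (paths follow the steps above):
# The first certificate in server.crt is the domain certificate,
# so -enddate shows the date that matters.
openssl x509 -in /etc/nginx/ssl/server.crt -noout -subject -enddate
# Test the configuration before restarting Nginx.
nginx -t && systemctl restart nginx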
Monday, March 30, 2020
How to install a GlobalSign cert for Nginx
Install Certificate - Nginx
Introduction
This article will walk you through installing a certificate in Nginx.
Guidelines
Follow the step-by-step guidelines below.
To install a certificate in Nginx, a `Certificate Bundle` must be created. To accomplish this, each certificate (SSL Cert, Intermediate Cert, and Root Cert) must be in the PEM format.
- Open each certificate in a plain text editor.
- Create a new document in a plain text editor.
- Copy and paste the contents of each certificate into the new file.
The order should be:
- Your GlobalSign SSL Certificate
- GlobalSign Intermediate Certificate
- GlobalSign Root Certificate
- Your completed file should be in this format:
-----BEGIN CERTIFICATE-----
#Your GlobalSign SSL Certificate#
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
#GlobalSign Intermediate Certificate#
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
#GlobalSign Root Certificate#
-----END CERTIFICATE-----
- Save this `Certificate Bundle` as a .crt
- Upload the Certificate Bundle & private key to a directory on the Nginx server.
- Edit the Nginx virtual hosts file.
Open the Nginx virtual host file for the website you are securing.
If you need your site to be accessible through both secure (https) and non-secure (http) connections, you will need a server module for each type of connection.
Make a copy of the existing non-secure server module and paste it below the original.
Add the lines shown below:
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/your_domain.crt;
    ssl_certificate_key /etc/ssl/your_domain.key;
    server_name your.domain.com;
    access_log /var/log/nginx/nginx.vhost.access.log;
    error_log /var/log/nginx/nginx.vhost.error.log;
    location / {
        root /home/www/public_html/your.domain.com/public/;
        index index.html;
    }
}
- Very Important – Make sure you adjust the file names to match your certificate files:
- ssl_certificate should be your primary certificate combined with the root & intermediate certificate bundle that you made in the previous step (e.g. your_domain.crt).
- ssl_certificate_key should be the key file generated when you created the CSR.
- Restart Nginx:
sudo /etc/init.d/nginx restart
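Once Nginx is serving the new certificate, you can confirm that the full chain is presented; your.domain.com below is a placeholder:
# Shows the certificate chain sent by the server and the verify result.
openssl s_client -connect your.domain.com:443 -servername your.domain.com < /dev/null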
Friday, March 27, 2020
How to use sed ...
Sed Command Examples
file.txt
unix is great os. unix is opensource. unix is free os.
learn operating system.
unixlinux which one you choose.
1. Replacing or substituting string
Sed command is mostly used to replace the text in a file. The below simple sed command replaces the word "unix" with "linux" in the file.
>sed 's/unix/linux/' file.txt
Here the "s" specifies the substitution operation. The "/" are delimiters. The "unix" is the search pattern and the "linux" is the replacement string.
By default, the sed command replaces the first occurrence of the pattern in each line and it won't replace the second, third...occurrence in the line.
2. Replacing the nth occurrence of a pattern in a line.
Use the /1, /2 etc flags to replace the first, second occurrence of a pattern in a line. The below command replaces the second occurrence of the word "unix" with "linux" in a line.
>sed 's/unix/linux/2' file.txt
3. Replacing all occurrences of the pattern in a line.
The substitute flag /g (global replacement) specifies the sed command to replace all the occurrences of the string in the line.
>sed 's/unix/linux/g' file.txt
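With the sample file.txt above, the global replacement produces:
linux is great os. linux is opensource. linux is free os.
learn operating system.
linuxlinux which one you choose.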
4. Replacing from nth occurrence to all occurrences in a line.
Use the combination of /1, /2 etc and /g to replace all the patterns from the nth occurrence of a pattern in a line. The following sed command replaces the third, fourth, fifth... "unix" word with "linux" word in a line.
>sed 's/unix/linux/3g' file.txt
5. Changing the slash (/) delimiter
You can use any delimiter other than the slash. As an example, if you want to change a web URL to another URL:
>sed 's/http:\/\//www/' file.txt
In this case the URL contains the delimiter character we used, so you have to escape the slash with the backslash character, otherwise the substitution won't work.
Using too many backslashes makes the sed command look awkward. In this case we can change the delimiter to another character as shown in the below example.
>sed 's_http://_www_' file.txt
>sed 's|http://|www|' file.txt
6. Using & as the matched string
There might be cases where you want to search for the pattern and replace that pattern by adding some extra characters to it. In such cases & comes in handy. The & represents the matched string.
>sed 's/unix/{&}/' file.txt
{unix} is great os. unix is opensource. unix is free os.
learn operating system.
{unix}linux which one you choose.
>sed 's/unix/{&&}/' file.txt
7. Using \1,\2 and so on to \9
The first pair of parentheses specified in the pattern represents \1, the second represents \2, and so on. The \1, \2 can be used in the replacement string to make changes to the source string. As an example, if you want to replace the word "unix" in a line with the word doubled, like "unixunix", use the sed command as below.
>sed 's/\(unix\)/\1\1/' file.txt
The parentheses need to be escaped with the backslash character. Another example is if you want to switch the words "unixlinux" to "linuxunix", the sed command is
>sed 's/\(unix\)\(linux\)/\2\1/' file.txt
Another example is switching the first three characters in a line
>sed 's/^\(.\)\(.\)\(.\)/\3\2\1/' file.txt
8. Duplicating the replaced line with /p flag
The /p print flag prints the replaced line twice on the terminal. If a line does not have the search pattern and is not replaced, then the /p prints that line only once.
>sed 's/unix/linux/p' file.txt
9. Printing only the replaced lines
Use the -n option along with the /p print flag to display only the replaced lines. Here the -n option suppresses the duplicate rows generated by the /p flag and prints the replaced lines only one time.
>sed -n 's/unix/linux/p' file.txt
If you use -n alone without /p, then the sed does not print anything.
10. Running multiple sed commands.
You can run multiple sed commands by piping the output of one sed command as input to another sed command.
>sed 's/unix/linux/' file.txt| sed 's/os/system/'
Sed provides -e option to run multiple sed commands in a single sed command. The above output can be achieved in a single sed command as shown below.
>sed -e 's/unix/linux/' -e 's/os/system/' file.txt
11. Replacing string on a specific line number.
You can restrict the sed command to replace the string on a specific line number. An example is
>sed '3 s/unix/linux/' file.txt
The above sed command replaces the string only on the third line.
12. Replacing string on a range of lines.
You can specify a range of line numbers to the sed command for replacing a string.
>sed '1,3 s/unix/linux/' file.txt
Here the sed command replaces the lines with range from 1 to 3. Another example is
>sed '2,$ s/unix/linux/' file.txt
Here $ indicates the last line in the file. So the sed command replaces the text from second line to last line in the file.
13. Replace on a lines which matches a pattern.
You can specify a pattern for the sed command to match in a line. Only if the pattern matches does sed look for the string to be replaced, and if it finds it, sed replaces the string.
>sed '/linux/ s/unix/centos/' file.txt
Here the sed command first looks for the lines which has the pattern "linux" and then replaces the word "unix" with "centos".
14. Deleting lines.
You can delete lines in a file by specifying a line number or a range of numbers.
>sed '2 d' file.txt
>sed '5,$ d' file.txt
15. Duplicating lines
You can make sed print each line of a file twice.
>sed 'p' file.txt
16. Sed as grep command
You can make the sed command work like the grep command.
>grep 'unix' file.txt
>sed -n '/unix/ p' file.txt
Here the sed command looks for the pattern "unix" in each line of the file and prints the lines that have the pattern.
You can also make sed work like grep -v by inverting the match with NOT (!).
>grep -v 'unix' file.txt
>sed -n '/unix/ !p' file.txt
The ! here inverts the pattern match.
17. Add a line after a match.
The sed command can add a new line after a pattern match is found. The "a" command to sed tells it to add a new line after a match is found.
>sed '/unix/ a "Add a new line"' file.txt
unix is great os. unix is opensource. unix is free os.
"Add a new line"
learn operating system.
unixlinux which one you choose.
"Add a new line"
18. Add a line before a match
The sed command can add a new line before a pattern match is found. The "i" command to sed tells it to add a new line before a match is found.
>sed '/unix/ i "Add a new line"' file.txt
"Add a new line"
unix is great os. unix is opensource. unix is free os.
learn operating system.
"Add a new line"
unixlinux which one you choose.
19. Change a line
The sed command can be used to replace an entire line with a new line. The "c" command to sed tells it to change the line.
>sed '/unix/ c "Change line"' file.txt
"Change line"
learn operating system.
"Change line"
20. Transform like tr command
The sed command can be used to convert the lower case letters to upper case letters by using the transform "y" option.
>sed 'y/ul/UL/' file.txt
Unix is great os. Unix is opensoUrce. Unix is free os.
Learn operating system.
UnixLinUx which one yoU choose.
Here the sed command transforms the letters "u" and "l" into their uppercase forms "U" and "L".
Wednesday, March 11, 2020
How to Install WordPress in a Subdirectory
How to Install WordPress in a Subdirectory (Step by Step)
Do you want to install WordPress in a subdirectory? Installing WordPress in a subdirectory allows you to run multiple WordPress instances under the same domain or even a subdomain name. In this article, we will show you how to install WordPress in a subdirectory without affecting the parent domain name.
Subdomain vs Subdirectory? Which One is Better for SEO?
Normally, you would want to start a WordPress website on its own domain name (for example, wpbeginner.com). However, sometimes you may want to create additional websites on the same domain name. This can be done by either installing WordPress in a subdomain (http://newwebsite.example.com) or as a subdirectory (http://example.com/newwebsite/).
One question that we get asked is which one is better for SEO?
Search engines treat subdomains differently from root domain names and assign them rankings as a totally different website.
On the other hand, sub-directories benefit from the domain authority of the root domain thus ranking higher in most cases.
An easier way to create separate WordPress sites in both subdomain or subdirectory is by installing WordPress multisite network.
However, if you want to keep two websites managed separately, then you can install different instances of WordPress.
That being said, let’s take a look at how to install WordPress in a subdirectory.
Step 1. Create a Subdirectory under The Root Domain Name
First you need to create a subdirectory or folder under your root domain name. This is where you will install WordPress files.
Connect to your WordPress hosting account using an FTP client or the File Manager in cPanel.
Once connected, go to the root folder of your website. Usually it is the /public_html/ folder. If you already have WordPress installed in the root folder, then you will see your WordPress files and folders there.
Next, you need to right click and select ‘Create new directory’ from the menu.
You need to be careful when choosing the name for your subdirectory. This will be part of your new WordPress site’s URL and what your users will type in their browsers to reach this website.
For example, if you name this directory travel-guides then your WordPress website’s address will be:
http://example.com/travel-guides/
Step 2. Upload WordPress Files
Your newly created subdirectory is empty at the moment. Let's change that by uploading the WordPress files.
First you need to visit the WordPress.org website and click on the download button.
Your browser will now download the zip file containing the latest WordPress software to your computer.
After downloading the file, you need to select and extract it. Mac users can double click the file to extract it and Windows users need to right click and then select ‘Extract All’.
After extracting the zip file, you will see ‘wordpress’ folder containing all the WordPress files.
Now let’s upload these files to your new subdirectory.
Connect to your website using a FTP client and go to the subdirectory you created in the first step.
In the local files panel of your FTP client, go to the WordPress folder you just extracted.
Select all files in the WordPress folder and then upload them to your new subdirectory.
Step 3. Create New Database
WordPress stores all your content in a database. You need to create a new database to use with your new WordPress site installed in a subdirectory.
First, you need to log in to the cPanel dashboard of your WordPress hosting account. Click on ‘MySQL Databases’ under the Databases section.
On the next screen, you need to provide a name for your new database and then click on ‘Create Database’ button to continue.
Your cPanel dashboard will now create the new MySQL database. In order to use this database you need to create a MySQL username.
Scroll down to MySQL Users section and provide a new username and password. Click on ‘Create User’ button to continue.
Next, you need to give this newly created user privileges to work on the database you created earlier.
Scroll down to ‘Add user to database’ section. Select your MySQL username and then select your newly created database.
Click on Add button to continue.
cPanel will now grant the MySQL user full privileges on your newly created database.
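If your host also gives you shell access to MySQL, the same database, user, and grant can be created from the command line; the database name, username, and password below are placeholders:
mysql -u root -p -e "CREATE DATABASE wp_subsite;"
mysql -u root -p -e "CREATE USER 'wp_subsite_user'@'localhost' IDENTIFIED BY 'choose_a_strong_password';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON wp_subsite.* TO 'wp_subsite_user'@'localhost'; FLUSH PRIVILEGES;"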
Step 4. Install WordPress
Now that everything is in place, you can go ahead and install WordPress. Simply visit the directory you created earlier in a web browser by typing a URL like this:
http://example.com/your-subdirectory-name/
This will bring up the WordPress installation wizard. First you need to select the language for your WordPress website and click on the continue button.
Next, you will be asked to provide your WordPress database name, database username, password, and host. Enter the database details and click on the submit button.
WordPress will now connect to your database and you will see a success message like this:
Click on ‘Run the install’ button to continue.
On the next screen, you will be asked to provide a title for your website and choose an admin username, password, and email address.
After entering your website details, click on ‘Run install’ button to continue.
WordPress will now set up your website and will show you a success message:
You can now go ahead and login to your new WordPress website installed in the subdirectory.
Step 5. Fix Permalinks
If you have a separate WordPress install in the root directory, then the .htaccess file of your subdirectory will conflict with it. This will result in 404 errors on your website.
To solve this, you need to edit the .htaccess file in your subdirectory WordPress install. Replace the code inside your .htaccess file with the following code:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /your-subdirectory/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /your-subdirectory/index.php [L]
</IfModule>
# END WordPress
We hope this article helped you install WordPress in a subdirectory. You may also want to see our ultimate step by step WordPress SEO guide for beginners.
Tuesday, March 10, 2020
How to use Awk to Find And Replace Fields Values
Awk Find And Replace Fields Values
foo bar 12,300.50
foo bar 2,300.50
abc xyz 1,22,300.50
How do I remove all "," from the 3rd field using awk and pass the output to bc -l in the following format to get the sum of all numbers:
12300.50+2300.50+122300.50
You can use the gsub() function as follows. The syntax is:
gsub("find", "replace")
gsub("find-regex", "replace")
gsub("find-regex", "replace", t)
gsub(r, s [, t])
gsub("find-regex", "replace")
gsub("find-regex", "replace", t)
gsub(r, s [, t])
From the awk man page:
For each substring matching the regular expression r in the string t, substitute the string s, and return the number of substitutions. If t is not supplied, use $0. An & in the replacement text is replaced with the text that was actually matched. Use \& to get a literal &.
You can also use the following syntax:
gensub(r, s, h [, t])
From the awk man page:
Search the target string t for matches of the regular expression r. If h is a string beginning with g or G, then replace all matches of r with s. Otherwise, h is a number indicating which match of r to replace. If t is not supplied, $0 is used instead. Within the replacement text s, the sequence \n, where n is a digit from 1 to 9, may be used to indicate just the text that matched the n’th parenthesized subexpression. The sequence \0 represents the entire matched text, as does the character &. Unlike sub() and gsub(), the modified string is returned as the result of the function, and the original target string is not changed.
Example
Create a data file and view it with cat /tmp/data.txt:
foo bar 12,300.50
foo bar 2,300.50
abc xyz 1,22,300.50
Type the following awk command:
awk '{ gsub(",","",$3); print $3 }' /tmp/data.txt
Sample outputs:
12300.50
2300.50
122300.50
You can pass the output to any command or calculate sum of the fields:
awk 'BEGIN{ sum=0} { gsub(",","",$3); sum += $3 } END{ printf "%.2f\n", sum}' /tmp/data.txt
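With the three-line data file above, the command prints the sum of the third field:
136901.50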
Or build the list and pass it to bc -l:
awk '{ x=gensub(",","","G",$3); printf x "+" } END{ print "0" }' /tmp/data.txt | bc -l
Thursday, January 16, 2020
How to use nmon to capture performance data
If you make use of Linux in your data center, then you have good
reason to be on the lookout for a simple-to-use monitoring tool that can
give you a quick rundown of what's going on with your server. Within
the realm of Linux, tools like this are plentiful. So where do you start
in your quest to find that perfect tool? For me, day-to-day monitoring
of Linux servers begins with Nigel's Monitor, aka nmon.
The nmon tool will,
using a simple ncurses interface, display the usage for CPU, memory,
network, disks, file system, NFS, top processes, resources, and power
micro-partition. What's best is that you get to choose what nmon
displays. And since it's text-based, you can secure shell into your
servers and get a quick glimpse from anywhere (as long as "anywhere" has
access to said server).
Let's install nmon and see how it is used.
Installation
The nmon application can be installed from your distribution's standard repository. This means you should be able to install nmon without too much fuss. For a distribution that uses apt (Debian, Ubuntu, etc.), do the following:
- Open up your terminal window
- Issue the command sudo apt-get update
- Install the software with the command sudo apt-get install nmon
- Allow the installation to complete
If you're using a distribution that uses dnf (Red Hat, Fedora, CentOS, etc.), the following steps will install nmon:
- Open up your terminal window
- Issue the command dnf install epel-release
- Install nmon with the command dnf install nmon
- Allow the installation to complete
Usage
Now that nmon is installed, you can fire it up by issuing the command nmon. In the nmon window (Figure A), you simply have to toggle the statistic(s) you want to view.
Say you want to view information about disks. If you hit the d key on your keyboard, nmon will display real-time statistics about any and all attached disks (Figure B).
Next we'll add network and memory to the mix by hitting the n key followed by the m key (on your keyboard). The resulting window will add those real-time statistics to the mix (Figure C).
You can toggle any of the added statistics off by hitting the associated keyboard key (the same used to add). The tool also includes the ability to increase and decrease the speed of updates. By hitting the - key on your keyboard you will speed up the screen updates and, conversely, the + key will slow them down.
To quit nmon, hit the q key and you will be returned to your bash prompt.
The tool also includes the ability to capture information and save it to a file. This can come in very handy if you need to monitor a system for a set period of time and then review the collected data later. Say you want to collect thirty rounds of information every 60 seconds. To do this, you would issue the command:
nmon -f -s 60 -c 30
After issuing the above command you will find a file in the current working directory with the extension .nmon. Open that file to view the collected data.
Scheduling data collection
You could even create a cron job to schedule a regular dump of nmon-collected data (which could be handy for troubleshooting a recurring issue). A simple solution for this would be to create a bash script (we'll call it nmon.sh) that contains something like the following:
#!/bin/sh
nmon -f -s 60 -c 30
Save that file and give it executable permissions with the command chmod u+x nmon.sh. Now open crontab for editing with the command crontab -e and enter something like this:
30 11 * * * ~/nmon.sh
Save and close crontab. The above cron job will run every day at 11:30 AM. Modify that to fit your needs and you have an easy solution for troubleshooting an issue occurring on your Linux data center machines.
After you have collected the .nmon file you can convert it to HTML with a tool called nmonchart.
Syntax:
- nmonchart <file>.nmon <file>.html
For example:
- nmonchart blue_150508_0800.nmon blue_150508_0800.html
- nmonchart blue_150508_0800.nmon
- if you omit the target filename, it will use the source filename and replace .nmon with .html
Or you could put the .html straight on to your website (assuming Apache is using /var/www/html)
- nmonchart blue_150508_0800.nmon /var/www/html/blue_150508_0800.html
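Once a collection run has finished, the newest .nmon file in the current directory can be charted straight into the web root; the web root path below is an assumption:
# Chart the most recent .nmon file into Apache's document root.
nmonchart "$(ls -t *.nmon | head -n1)" /var/www/html/latest_nmon_report.html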
Friday, December 20, 2019
How to sftp files automatically
sshpass -p YOUR_PASSWORD sftp -oBatchMode=no -b YOUR_COMMAND_FILE_PATH USER@HOST
You can also redirect a command list file to sftp:
sftp login@host < /path/to/command/list
To do this more safely, put
export SSHPASS='your_password'
in ~/.bashrc and run sshpass with the -e flag. I have used this command in a project like this:
echo 'ls -t upload/*.xml' | sshpass -e sftp -oBatchMode=no -b - user@example.com | grep -v "sftp>" | head -n1
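Putting the pieces together, here is a minimal sketch of a non-interactive transfer with a command file; the paths, user, and host are placeholders:
# Build a batch file of sftp commands, then run it with the password
# taken from the SSHPASS environment variable (-e).
cat > /tmp/sftp_batch.txt <<'EOF'
cd upload
get report.xml /tmp/report.xml
bye
EOF

export SSHPASS='your_password'
sshpass -e sftp -oBatchMode=no -b /tmp/sftp_batch.txt user@example.com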
Friday, December 13, 2019
How to rename multiple files with a Linux script
To rename all files starting with detail, replacing that prefix with test:
for f in detail* ; do mv "${f}" "${f/detail/test}"; done
Before
detail-1.txt
detail-2.txt
After
test-1.txt
test-2.txt
To rename all .bak files to .txt:
for j in *.bak; do mv -v -- "$j" "${j%.bak}.txt"; done
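To preview what a rename loop will do before running it, replace mv with echo mv first:
# Dry run: prints the mv commands without renaming anything.
for f in detail* ; do echo mv "${f}" "${f/detail/test}"; done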
Thursday, November 28, 2019
Syslog-ng Log Parser Options
Options of syslog-parser parsers
The syslog-parser has the following options.
default-facility()
Type: facility string
Default: kern
Description: This parameter assigns a facility value to the messages received from the file source if the message does not specify one.
default-priority()
Type: priority string
Default: (empty)
Description: This parameter assigns an emergency level to the messages received from the file source if the message does not specify one. For example, default-priority(warning).
flags()
Type: assume-utf8, empty-lines, expect-hostname, kernel, no-hostname, no-multi-line, no-parse, sanitize-utf8, store-legacy-msghdr, syslog-protocol, validate-utf8
Default: empty set
Description: Specifies the log parsing options of the source.
- assume-utf8: The assume-utf8 flag assumes that the incoming messages are UTF-8 encoded, but does not verify the encoding. If you explicitly want to validate the UTF-8 encoding of the incoming message, use the validate-utf8 flag.
- empty-lines: Use the empty-lines flag to keep the empty lines of the messages. By default, syslog-ng OSE removes empty lines automatically.
- expect-hostname: If the expect-hostname flag is enabled, syslog-ng OSE will assume that the log message contains a hostname and parse the message accordingly. This is the default behavior for TCP sources. Note that pipe sources use the no-hostname flag by default.
- kernel: The kernel flag makes the source default to the LOG_KERN | LOG_NOTICE priority if not specified otherwise.
- no-hostname: Enable the no-hostname flag if the log message does not include the hostname of the sender host. That way, syslog-ng OSE assumes that the first part of the message header is ${PROGRAM} instead of ${HOST}. For example:
source s_dell { network(port(2000) flags(no-hostname)); };
- no-multi-line: The no-multi-line flag disables line-breaking in the messages: the entire message is converted to a single line. Note that this happens only if the underlying transport method actually supports multi-line messages. Currently the file() and pipe() drivers support multi-line messages.
- no-parse: By default, syslog-ng OSE parses incoming messages as syslog messages. The no-parse flag completely disables syslog message parsing and processes the complete line as the message part of a syslog message. The syslog-ng OSE application will generate a new syslog header (timestamp, host, and so on) automatically and put the entire incoming message into the MESSAGE part of the syslog message (available using the ${MESSAGE} macro). This flag is useful for parsing messages not complying with the syslog format. If you are using the flags(no-parse) option, then syslog message parsing is completely disabled, and the entire incoming message is treated as the ${MESSAGE} part of a syslog message. In this case, syslog-ng OSE generates a new syslog header (timestamp, host, and so on) automatically. Note that since flags(no-parse) disables message parsing, it interferes with other flags, for example, disables flags(no-multi-line).
- dont-store-legacy-msghdr: By default, syslog-ng stores the original incoming header of the log message. This is useful if the original format of a non-syslog-compliant message must be retained (syslog-ng automatically corrects minor header errors, for example, adds a whitespace before msg in the following message: Jan 22 10:06:11 host program:msg). If you do not want to store the original header of the message, enable the dont-store-legacy-msghdr flag.
- sanitize-utf8: When using the sanitize-utf8 flag, syslog-ng OSE converts non-UTF-8 input to an escaped form, which is valid UTF-8.
- store-raw-message: Save the original message as received from the client in the ${RAWMSG} macro. You can forward this raw message in its original form to another syslog-ng node using the syslog-ng() destination, or to a SIEM system, ensuring that the SIEM can process it. Available only in 3.16 and later.
- syslog-protocol: The syslog-protocol flag specifies that incoming messages are expected to be formatted according to the new IETF syslog protocol standard (RFC5424), but without the frame header. Note that this flag is not needed for the syslog driver, which handles only messages that have a frame header.
- validate-utf8: The validate-utf8 flag enables encoding-verification for messages formatted according to the new IETF syslog standard (for details, see IETF-syslog messages). If the BOM character is missing, but the message is otherwise UTF-8 compliant, syslog-ng automatically adds the BOM character to the message.
template()
Synopsis: template("${<macroname>}")
Description: The macro that contains the part of the message that the parser will process. It can also be a macro created by a previous parser of the log path. By default, the parser processes the entire message (${MESSAGE}).
Parsing messages with comma-separated and similar values
The syslog-ng OSE application can separate parts of log messages (that is, the contents of the ${MESSAGE} macro) at delimiter characters or strings to named fields (columns). One way to achieve this is to use a csv (comma-separated-values) parser (for other methods and possibilities, see the other sections of parser: Parse and segment structured messages). The parsed fields act as user-defined macros that can be referenced in message templates, file- and table names, and so on.
Parsers are similar to filters: they must be defined in the syslog-ng OSE configuration file and used in the log statement. You can also define the parser inline in the log path.
NOTE:
The order of filters, rewriting rules, and parsers in the log statement is important, as they are processed sequentially.
To create a csv-parser(), you have to define the columns of the message, the separator characters or strings (also called delimiters, for example, semicolon or tabulator), and optionally the characters that are used to escape the delimiter characters (quote-pairs()).
Declaration:
parser{ csv-parser( columns(column1, column2, ...) delimiters(chars(" "), strings(" ")) ); };
Column names work like macros.
Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, it will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.)
Example: Segmenting hostnames separated with a dash
The following example separates hostnames like example-1 and example-2 into two parts.
parser p_hostname_segmentation { csv-parser(columns("HOSTNAME.NAME", "HOSTNAME.ID") delimiters("-") flags(escape-none) template("${HOST}")); }; destination d_file { file("/var/log/messages-${HOSTNAME.NAME:-examplehost}"); }; log { source(s_local); parser(p_hostname_segmentation); destination(d_file); };
Example: Parsing Apache log files
The following parser processes the log of Apache web servers and separates them into different fields. Apache log messages can be formatted like:
"%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %T %v"
Here is a sample message:
192.168.1.1 - - [31/Dec/2007:00:17:10 +0100] "GET /cgi-bin/example.cgi HTTP/1.1" 200 2708 "-" "curl/7.15.5 (i4 86-pc-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8c zlib/1.2.3 libidn/0.6.5" 2 example.mycompany
To parse such logs, the delimiter character is set to a single whitespace (delimiters(" ")). Whitespaces between quotes and brackets are ignored (quote-pairs('""[]')).
parser p_apache { csv-parser( columns("APACHE.CLIENT_IP", "APACHE.IDENT_NAME", "APACHE.USER_NAME", "APACHE.TIMESTAMP", "APACHE.REQUEST_URL", "APACHE.REQUEST_STATUS", "APACHE.CONTENT_LENGTH", "APACHE.REFERER", "APACHE.USER_AGENT", "APACHE.PROCESS_TIME", "APACHE.SERVER_NAME") flags(escape-double-char,strip-whitespace) delimiters(" ") quote-pairs('""[]') ); };
The results can be used for example to separate log messages into different files based on the APACHE.USER_NAME field. If the field is empty, the nouser name is assigned.
log { source(s_local); parser(p_apache); destination(d_file); }; destination d_file { file("/var/log/messages-${APACHE.USER_NAME:-nouser}"); };
Example: Segmenting a part of a message
Multiple parsers can be used to split a part of an already parsed message into further segments. The following example splits the timestamp of a parsed Apache log message into separate fields.
parser p_apache_timestamp { csv-parser( columns("APACHE.TIMESTAMP.DAY", "APACHE.TIMESTAMP.MONTH", "APACHE.TIMESTAMP.YEAR", "APACHE.TIMESTAMP.HOUR", "APACHE.TIMESTAMP.MIN", "APACHE.TIMESTAMP.MIN", "APACHE.TIMESTAMP.ZONE") delimiters("/: ") flags(escape-none) template("${APACHE.TIMESTAMP}") ); }; log { source(s_local); parser(p_apache); parser(p_apache_timestamp); destination(d_file); };
Further examples:
- For an example on using the greedy option, see Example: Adding the end of the message to the last column.
Options of CSV parsers
The syslog-ng OSE application can separate parts of log messages (that is, the contents of the ${MESSAGE} macro) at delimiter characters or strings to named fields (columns). One way to achieve this is to use a csv (comma-separated-values) parser (for other methods and possibilities, see the other sections of parser: Parse and segment structured messages). The parsed fields act as user-defined macros that can be referenced in message templates, file- and table names, and so on.
Parsers are similar to filters: they must be defined in the syslog-ng OSE configuration file and used in the log statement. You can also define the parser inline in the log path.
NOTE:
The order of filters, rewriting rules, and parsers in the log statement is important, as they are processed sequentially.
To create a csv-parser(), you have to define the columns of the message, the separator characters or strings (also called delimiters, for example, semicolon or tabulator), and optionally the characters that are used to escape the delimiter characters (quote-pairs()).
Declaration:
parser{ csv-parser( columns(column1, column2, ...) delimiters(chars(" "), strings(" ")) ); };
Column names work like macros.
Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, it will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.)
columns()
Synopsis: columns("PARSER.COLUMN1", "PARSER.COLUMN2", ...)
Description: Specifies the name of the columns to separate messages to. These names will be automatically available as macros. The values of these macros do not include the delimiters.
delimiters()
Synopsis:
delimiters(chars("<delimiter_characters>"))
delimiters(strings("<delimiter_string1>", "<delimiter_string2>", ...))
delimiters(chars("<delimiter_characters>"), strings("<delimiter_string1>"))
Description: The delimiter is the character or string that separates the columns in the message. If you specify multiple characters using the delimiters(chars("")) option, every character will be treated as a delimiter. To separate the columns at the tabulator (tab character), specify \t. For example, to separate the text at every hyphen (-) and colon (:) character, use delimiters(chars("-:")). Note that the delimiters will not be included in the column values.
String delimiters:
If you have to use a string as a delimiter, list your string delimiters in the delimiters(strings("<delimiter_string1>", "<delimiter_string2>", ...)) format.
By default, syslog-ng OSE uses space as a delimiter. If you want to use only the strings as delimiters, you have to disable the space delimiter, for example: delimiters(chars(""), strings(""))
Multiple delimiters:
If you use more than one delimiter, note the following points:
- syslog-ng OSE will split the message at the nearest possible delimiter. The order of the delimiters in the configuration file does not matter.
- You can use both string delimiters and character delimiters in a parser.
- The string delimiters can include characters that are also used as character delimiters.
- If a string delimiter and a character delimiter both match at the same position of the message, syslog-ng OSE uses the string delimiter.
dialect()
Synopsis: escape-none | escape-backslash | escape-double-char
Description: Specifies how to handle escaping in the parsed message. The following values are available. Default value: escape-none
- escape-backslash: The parsed message uses the backslash (\) character to escape quote characters.
- escape-double-char: The parsed message repeats the quote character when the quote character is used literally. For example, to escape a comma (,), the message contains two commas (,,).
- escape-none: The parsed message does not use any escaping for using the quote character literally.
parser p_demo_parser {
    csv-parser(
        prefix(".csv.")
        delimiters(" ")
        dialect(escape-backslash)
        flags(strip-whitespace, greedy)
        columns("column1", "column2", "column3")
    );
};
flags()
Synopsis: drop-invalid, escape-none, escape-backslash, escape-double-char, greedy, strip-whitespace
Description: Specifies various options for parsing the message. The following flags are available:
- drop-invalid: When the drop-invalid option is set, the parser does not process messages that do not match the parser. For example, a message does not match the parser if it has less columns than specified in the parser, or it has more columns but the greedy flag is not enabled. Using the drop-invalid option practically turns the parser into a special filter, that matches messages that have the predefined number of columns (using the specified delimiters).
TIP: Messages dropped as invalid can be processed by a fallback log path. For details on the fallback option, see Log path flags.
- escape-backslash: The parsed message uses the backslash (\) character to escape quote characters.
- escape-double-char: The parsed message repeats the quote character when the quote character is used literally. For example, to escape a comma (,), the message contains two commas (,,).
- escape-none: The parsed message does not use any escaping for using the quote character literally.
- greedy: The greedy option assigns the remainder of the message to the last column, regardless of the delimiter characters set. You can use this option to process messages where the number of columns varies.
Example: Adding the end of the message to the last column
If the greedy option is enabled, the syslog-ng application adds the not-yet-parsed part of the message to the last column, ignoring any delimiter characters that may appear in this part of the message.
For example, you receive the following comma-separated message: example1, example2, example3, and you segment it with the following parser:
csv-parser(columns("COLUMN1", "COLUMN2", "COLUMN3") delimiters(","));
The COLUMN1, COLUMN2, and COLUMN3 variables will contain the strings example1, example2, and example3, respectively. If the message looks like example1, example2, example3, some more information, then any text appearing after the third comma (that is, some more information) is not parsed, and possibly lost if you use only the variables to reconstruct the message (for example, to send it to different columns of an SQL table).
Using the greedy flag will assign the remainder of the message to the last column, so that the COLUMN1, COLUMN2, and COLUMN3 variables will contain the strings example1, example2, and example3, some more information.
csv-parser(columns("COLUMN1", "COLUMN2", "COLUMN3") delimiters(",") flags(greedy));
- strip-whitespace: The strip-whitespace flag removes leading and trailing whitespaces from all columns.
null()
Synopsis: string
Description: If the value of a column is the value of the null() parameter, syslog-ng OSE changes the value of the column to an empty string. For example, if the columns of the message contain the "N/A" string to represent empty values, you can use the null("N/A") option to change these values to empty stings.
prefix()
Synopsis: prefix()
Description: Insert a prefix before the name part of the parsed name-value pairs to help further processing. For example:
- To insert the my-parsed-data. prefix, use the prefix(my-parsed-data.) option.
- To refer to a particular data that has a prefix, use the prefix in the name of the macro, for example, ${my-parsed-data.name}.
- If you forward the parsed messages using the IETF-syslog protocol, you can insert all the parsed data into the SDATA part of the message using the prefix(.SDATA.my-parsed-data.) option.
Names starting with a dot (for example, .example) are reserved for use by syslog-ng OSE. If you use such a macro name as the name of a parsed value, it will attempt to replace the original value of the macro (note that only soft macros can be overwritten, see Hard vs. soft macros for details). To avoid such problems, use a prefix when naming the parsed values, for example, prefix(my-parsed-data.)
quote-pairs()
Synopsis: quote-pairs('<quote_pairs>')
Description: List quote-pairs between single quotes. Delimiter characters or strings enclosed between quote characters are ignored. Note that the beginning and ending quote character does not have to be identical, for example [} can also be a quote-pair. For an example of using quote-pairs() to parse Apache log files, see Example: Parsing Apache log files.
template()
Synopsis: template("${<macroname>}")
Description: The macro that contains the part of the message that the parser will process. It can also be a macro created by a previous parser of the log path. By default, the parser processes the entire message (${MESSAGE}).
For examples, see Example: Segmenting hostnames separated with a dash and Example: Segmenting a part of a message.
Parsing key=value pairs
The syslog-ng OSE application can separate a message consisting of whitespace or comma-separated key=value pairs (for example, Postfix log messages) into name-value pairs. You can also specify other separator character instead of the equal sign, for example, colon (:) to parse MySQL log messages. The syslog-ng OSE application automatically trims any leading or trailing whitespace characters from the keys and values, and also parses values that contain unquoted whitespace. For details on using value-pairs in syslog-ng OSE see Structuring macros, metadata, and other value-pairs.
You can refer to the separated parts of the message using the key of the value as a macro. For example, if the message contains KEY1=value1,KEY2=value2, you can refer to the values as ${KEY1} and ${KEY2}.
NOTE:
If a log message contains the same key multiple times (for example, key1=value1, key2=value2, key1=value3, key3=value4, key1=value5), then syslog-ng OSE stores only the last (rightmost) value for the key. Using the previous example, syslog-ng OSE will store the following pairs: key1=value5, key2=value2, key3=value4.
CAUTION:
If the names of keys in the message are the same as the names of syslog-ng OSE soft macros, the value from the parsed message will overwrite the value of the macro. For example, the PROGRAM=value1, MESSAGE=value2 content will overwrite the ${PROGRAM} and ${MESSAGE} macros. To avoid overwriting such macros, use the prefix() option.
Hard macros cannot be modified, so they will not be overwritten. For details on the macro types, see Hard vs. soft macros.
The parser discards message sections that are not key=value pairs, even if they appear between key=value pairs that can be parsed.
To parse key=value pairs, define a parser that has the kv-parser() option. Defining the prefix is optional. By default, the parser will process the ${MESSAGE} part of the log message. You can also define the parser inline in the log path.
Declaration:
parser parser_name { kv-parser( prefix() ); };
Example: Using a key=value parser
In the following example, the source is a log message consisting of comma-separated key=value pairs, for example, a Postfix log message:
Jun 20 12:05:12 mail.example.com postfix/qmgr[35789]: EC2AC1947DA: from= , size=807, nrcpt=1 (queue active)
The kv-parser inserts the ".kv." prefix before all extracted name-value pairs. The destination is a file, that uses the format-json template function. Every name-value pair that begins with a dot (".") character will be written to the file (dot-nv-pairs). The log line connects the source, the destination and the parser.
source s_kv { network(port(21514)); }; destination d_json { file("/tmp/test.json" template("$(format-json --scope dot-nv-pairs)\n")); }; parser p_kv { kv-parser (prefix(".kv.")); }; log { source(s_kv); parser(p_kv); destination(d_json); };
You can also define the parser inline in the log path.
source s_kv { network(port(21514)); }; destination d_json { file("/tmp/test.json" template("$(format-json --scope dot-nv-pairs)\n")); }; log { source(s_kv); parser { kv-parser (prefix(".kv.")); }; destination(d_json); };
You can set the separator character between the key and the value to parse for example key:value pairs, like MySQL logs:
Mar 7 12:39:25 myhost MysqlClient[20824]: SYSTEM_USER:'oscar', MYSQL_USER:'my_oscar', CONNECTION_ID:23, DB_SERVER:'127.0.0.1', DB:'--', QUERY:'USE test;'
parser p_mysql { kv-parser(value-separator(":") prefix(".mysql.")); };
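A minimal sketch tying the MySQL parser above into a complete log path; the source port, output path, and statement names are assumptions:
source s_mysql { network(port(21515)); };
destination d_mysql_json { file("/tmp/mysql.json" template("$(format-json --scope dot-nv-pairs)\n")); };
log { source(s_mysql); parser(p_mysql); destination(d_mysql_json); };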