Author: Immanuel Raj

  • Why Do We Need God? Understanding Christian Faith and Purpose

    Why Do We Need God? Understanding Christian Faith and Purpose

    As young believers, we often ask questions about our faith. Why do we need God? What is our purpose after coming to Him? When should we start fulfilling His calling? Let’s explore these questions through the lens of the Bible.

    Why Do We Need God? Finding Purpose, Faith, and God’s Plan

    Romans 3:23 reminds us, “For all have sinned and fall short of the glory of God.” No one is perfect, and sin separates us from God. But the good news is that God is always willing to forgive.

    1 John 1:9 assures us, “If we confess our sins, He is faithful and just and will forgive us our sins and purify us from all unrighteousness.”

    We need God because He alone provides salvation, peace, and eternal life. Jesus said in John 14:6, “I am the way and the truth and the life. No one comes to the Father except through me.” Seeking God’s will, growing in faith, and walking in obedience to His Word bring true joy and purpose, help us overcome life’s challenges, and build a strong foundation in Him.

    The Importance of Faith in Christianity

    Faith is the key to our relationship with God. Trusting in His promises and believing in His purpose for our lives strengthens our journey. Walking with God daily and seeking His guidance through prayer and scripture keeps us aligned with His divine plan.

    What is God’s Purpose for My Life? Understanding My Calling

    Many people think that serving God means full-time ministry. But God has given each of us a unique calling. Some are called to go, while others are called to support, lead, or serve in different ways.

    1 Peter 4:10 says, “Each of you should use whatever gift you have received to serve others, as faithful stewards of God’s grace in its various forms.”

    Our purpose is to glorify God in whatever we do, whether in our jobs, studies, or ministry. Colossians 3:23-24 encourages us, “Whatever you do, work at it with all your heart, as working for the Lord, not for human masters.”

    Like the FMPB motto, “Go or Send,” we must recognize our role in God’s plan and live it out. Using our talents, discovering our divine calling, and trusting God’s plan are essential steps toward fulfilling our purpose. Honoring God through our work and being a light to others in our daily walk will help us live with meaning and lead us into a deeper relationship with Him.

    How to Trust God’s Plan for My Life

    Many struggle with trusting God’s plan, but the Bible reassures us that He has a purpose for each of us. Proverbs 3:5-6 encourages us to “Trust in the Lord with all your heart and lean not on your own understanding; in all your ways submit to him, and he will make your paths straight.”

    When Should I Follow God’s Calling? Trusting God’s Timing

    The best time to seek God is now.

    Ecclesiastes 12:1 urges us, “Remember your Creator in the days of your youth, before the days of trouble come.”

    2 Corinthians 6:2 reminds us, “Now is the time of God’s favor, now is the day of salvation.”

    We don’t know how much time we have, so we must live with purpose today. Psalm 90:12 teaches us, “Teach us to number our days, that we may gain a heart of wisdom.”

    James 4:17 warns, “If anyone, then, knows the good they ought to do and doesn’t do it, it is sin for them.”

    Don’t wait for a perfect time—start fulfilling God’s purpose now! Committing to a Christ-centered life, taking daily steps to seek God’s wisdom, serving others with love, and trusting in His perfect timing will keep you aligned with His will and let you experience His abundant blessings.

    How to Strengthen My Faith in God

    Spiritual growth requires daily commitment. Reading the Bible, praying, and surrounding ourselves with other believers strengthens our faith and helps us stay firm in God’s plan.

    Conclusion: Living a Purpose-Driven Life in Christ

    We need God because He alone offers salvation and forgiveness. Once we come to Him, our purpose is to serve Him in whatever way He has planned for us. And the best time to start is today. Let’s commit to walking with God, seeking His guidance, and living out His purpose for our lives. Strengthening our faith, growing in spiritual wisdom, trusting God in difficult times, and embracing biblical guidance will keep us on the path of a Christ-centered, purpose-driven life.

    What step will you take today to grow closer to God and live out your calling? Let me know in the comments!

  • Automating Flask Deployment Using Docker and GitHub Actions

    Automating Flask Deployment Using Docker and GitHub Actions

    I’ve had enough of tedious deployments, so I decided to become proficient in automating the process. While I’ve previously used GitHub Actions to streamline deployments, I had yet to do so in a Flask environment. In this article, I’ll outline the steps I followed to deploy my Flask backend using Docker and GitHub Actions.

    1. Creating a Flask Application for Docker Deployment

    The first step is to create a simple Flask application. The default Flask homepage example works perfectly, but I modified it to use a different port to avoid conflicts.

    app.py

    from flask import Flask
    
    app = Flask(__name__)
    
    @app.route("/")
    def hello_world():
        return "<p>Hello, World!</p>"
    
    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5001)

    2. Setting Up the Dockerfile for Your Flask Docker App

    Using Docker is one of the simplest ways to deploy our application. It not only eases the deployment process but also allows us to automate it effectively. Before we get into Docker, a note on Flask production deployment: like Django, Flask needs a WSGI server to launch the program in production. The two most popular WSGI servers are Gunicorn and uWSGI; I won’t delve into how they differ from one another. Gunicorn will be used as the WSGI server in this article.

    Here is the Dockerfile.

    Dockerfile

    FROM python:3.12-slim
    
    # First, copy requirements.txt so dependencies can be installed
    COPY requirements.txt /tmp/requirements.txt
    
    # Install the dependencies
    RUN pip install -r /tmp/requirements.txt
    
    # Copy the application code and set the working directory
    COPY . /tmp/app
    
    WORKDIR /tmp/app
    
    # Start Gunicorn, bound to the same port the app uses
    CMD ["gunicorn", "--bind", "0.0.0.0:5001", "app:app"]

    Remember to include flask and gunicorn in requirements.txt.
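    A minimal requirements.txt for this setup might look like the following (unpinned here; pin versions as you prefer):

    flask
    gunicorn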

    You can build and run the Dockerfile using the following commands:

    docker build -t flask-deployment .
    docker run -dp 5001:5001 flask-deployment

    3. Automating Flask Deployment with GitHub Actions for Docker

    Now that we have our Dockerfile ready, it’s time to automate the deployment with GitHub Actions. Here’s a simple workflow to achieve this:

    name: deploy
    
    on:
      push:
        branches:
          - master
    
    jobs:
      build:
        runs-on: self-hosted
        steps:
        - uses: actions/checkout@v4
        - name: Build and Run Docker
          run: docker build -t tutorial/flask-deployment . && docker run -dp 5001:5001 tutorial/flask-deployment

    This setup triggers the workflow only when changes are pushed to the master branch, and it runs on my personal virtual machine. (On repeat deployments, remember to stop and remove the previous container first, since it will still be holding port 5001.)

    After committing the deploy.yml file, use curl localhost:5001 from your server. You should see “Hello, World!”—an indication that your Flask app is up and running, and the automation is working!

    Troubleshooting Flask Docker GitHub Actions

    If you encounter any issues during deployment, check the GitHub Actions run logs and the container logs to identify the root cause.
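    For example, on the machine running the containers (the container is resolved here via docker ps -lq, i.e. the most recently created one):

    docker ps                        # is the container actually running?
    docker logs $(docker ps -lq)     # logs of the most recently created container
    curl -v localhost:5001           # is anything answering on the port?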

    Conclusion

    Automating Flask deployment using Docker and GitHub Actions not only saves time but also reduces errors in the deployment process. By following these steps, you can ensure a smoother workflow for your projects.

  • Retrofitting Node 20 in Ubuntu 18 LTS

    Retrofitting Node 20 in Ubuntu 18 LTS

    One of the hardest parts of system administration is keeping systems updated and secure. However, when an upgrade is not feasible but application support is needed, there’s often no choice but to retrofit the necessary dependencies. This may involve tinkering with the system, patching, or using custom configurations to ensure the application works without breaking other systems or the OS itself.

    In this guide, we will explore how to install Node.js 20 on Ubuntu 18.04 LTS, along with npm 10, using Node Version Manager (NVM) to streamline the process. By retrofitting Node.js 20 on an older version of Ubuntu, we can ensure compatibility without the need for a full OS upgrade, which may not always be practical in production environments.

    To ease things up and avoid headaches, we will use NVM (Node Version Manager) to manage the Node.js and npm versions required, making the installation smoother and reducing the potential for conflicts (a quick install sketch follows the requirements list below).

    Why on Ubuntu 18.04?

    As applications evolve, newer versions of tools like Node.js introduce better performance, enhanced security features, and new functionality. However, upgrading the entire operating system to support the latest versions isn’t always an option, especially in cases where legacy applications or system constraints exist. Ubuntu 18.04 LTS, being a stable and widely-used release, may still be part of many production systems. This guide helps maintain compatibility by retrofitting Node.js 20 into your environment without breaking other dependencies.

    Now, let’s dive into retrofitting on Ubuntu 18.04.3 LTS.

    What You Need

    Before we get started, here are a few things you will need:

    • Root access on your system
    • Node
    • NPM
    • Some essential build tools like make, gcc, and bison
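    The glibc patch below assumes Node 20 is already installed through NVM (note the /root/.nvm path it references). If you still need that part, the standard NVM flow looks roughly like this:

    # Install NVM (check the nvm-sh/nvm releases page for the current tag)
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh"

    # Install Node.js 20 (ships with npm 10)
    nvm install 20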

    Run the following commands to install the necessary build tools and dependencies:

    sudo apt-get install g++ make gcc bison patchelf
    wget -c https://ftp.gnu.org/gnu/glibc/glibc-2.28.tar.gz
    tar -zxf glibc-2.28.tar.gz
    cd glibc-2.28
    mkdir glibc-build && cd glibc-build
    ../configure --prefix=/opt/glibc-2.28
    make && make install
    patchelf --set-interpreter /opt/glibc-2.28/lib/ld-linux-x86-64.so.2 --set-rpath /opt/glibc-2.28/lib/:/lib/x86_64-linux-gnu/:/usr/lib/x86_64-linux-gnu/ /root/.nvm/versions/node/v20.12.2/bin/node

    This will fix the following issue:

    node: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.28' not found (required by node)

    Conclusion

    By following these steps, you can successfully retrofit Node.js 20 onto Ubuntu 18.04 LTS. Using NVM simplifies managing versions, and the manual GLIBC patch ensures compatibility without the need for a full OS upgrade. This approach allows you to keep your system stable while still using the latest version of Node.js for your applications.

  • Fail2Ban – SSH, WordPress[ee] and Cloudflare

    Fail2ban is an open-source intrusion prevention software framework that aims to protect computer servers from brute-force attacks. It works by continuously monitoring various log files for patterns indicating failed login attempts or other suspicious activity. When it detects such patterns, it can take action by dynamically updating firewall rules to block the source of the suspicious activity, thereby preventing further unauthorized access attempts. Fail2ban is highly configurable and widely used to enhance the security of servers, particularly those exposed to the internet.

    Why is it important?

    Fail2ban is important for several reasons:

    1. Enhanced Security: By automatically blocking IP addresses that exhibit suspicious behavior, Fail2ban helps to protect servers from unauthorized access attempts, brute-force attacks, and other malicious activities.
    2. Reduced Risk of Compromise: By quickly responding to potential security threats, Fail2ban reduces the window of opportunity for attackers to compromise server systems.
    3. Cost-Effective Security Measure: Fail2ban is open-source software, meaning it’s freely available and can be easily integrated into existing server setups without significant financial investment.
    4. Customizable: Fail2ban is highly configurable, allowing system administrators to tailor it to their specific security needs and adjust settings based on the unique requirements of their servers.
    5. Automated Response: Fail2ban automates the process of detecting and responding to security incidents, reducing the burden on system administrators and ensuring a timely response to potential threats.

    Overall, Fail2ban plays a crucial role in strengthening the security posture of servers and mitigating the risk of unauthorized access and system compromise.

    What can it do?

    Fail2ban primarily serves as an intrusion prevention system, and it can perform several key functions:

    1. Monitoring Log Files: Fail2ban continuously monitors log files generated by various services such as SSH, FTP, Apache, Nginx, and others.
    2. Detection of Suspicious Activity: It analyzes log entries in real-time to detect patterns indicative of potentially malicious behavior, such as multiple failed login attempts, authentication errors, or other anomalies.
    3. Dynamic Firewall Rules: Upon detecting suspicious activity, Fail2ban dynamically updates firewall rules (e.g., iptables on Linux systems) to block the IP address associated with the detected activity. This prevents further access attempts from the same source.
    4. Temporary Bans: Fail2ban typically imposes temporary bans on offending IP addresses, preventing access for a configurable period. This approach helps to mitigate the risk of legitimate users being permanently locked out due to mistaken identity or transient issues.
    5. Alerting: Fail2ban can also be configured to send notifications or alerts to system administrators when suspicious activity is detected, allowing for timely investigation and response.
    6. Whitelisting and Custom Rules: It supports the configuration of whitelists to exempt trusted IP addresses from being blocked and allows for the creation of custom rules to target specific types of suspicious activity.

    Overall, Fail2ban provides a comprehensive set of features to enhance the security of servers by proactively identifying and mitigating potential threats in real-time.
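    To make this concrete, here is a minimal jail sketch for SSH (illustrative values; the log path and defaults vary by distro and Fail2ban version):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    port     = ssh
    logpath  = /var/log/auth.log
    maxretry = 5
    findtime = 10m
    bantime  = 1h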

    What to do with this!?

    While this is a really nice tool that can get the job done for us, it can be hard to understand at first, or to configure for exactly what we need.

    https://github.com/iamimmanuelraj/fail2ban

    This repository takes the open-source Fail2ban tool and configures it to work with EasyEngine: it blocks WordPress login attempts and bans offenders at the DNS level [Cloudflare only], and it also bans bad SSH login brute-force attempts.

    This setup is only helpful if you have proper SSH key-based authentication and a strong password for WordPress logins.

    Currently, the setup.sh script does the basic configuration needed for the whole setup.

    It is also configured to report abusive IP addresses to abuseipdb.com.

    What does this do

    • Uses Fail2ban
    • Bans bad SSH login actors
    • Bans bad WordPress login actors [EasyEngine setups only]
    • Stops bad WordPress login actors at the DNS level [works with Cloudflare only]
    • Reports the IPs that abuse you to abuseipdb.com
  • Cloudflare Header Tips

    I am pretty sure most developers have heard of Cloudflare.

    Be it for hosting, AI, storage, mail, security, infra, tunnels, DNS, etc.

    When you rely on free services to get things done, you cannot pay for everything you want, but you also don’t want to miss out on the good stuff. Not everyone gets all the good things for free.

    Hosting things on our own server has its own pros and cons. I do not have my own server; I am using GitHub Actions.

    I want all the good things in security. In the context of this post, that’s SSL.

    One of the most widely used SSL testers grades your site’s SSL security. It is kind of hard to get an A+ there if you are using free hosting services, since they don’t all account for these checks.

    I use a free hosting server, with Cloudflare for DNS and SSL. I got a D when I was first testing. I was really shocked at how insecure my site was.

    I was searching for ways to improve security without paying, as well.

    Luckily, Cloudflare has options to add the headers I want within Cloudflare itself. It lets me set headers as if I had my own server and were doing my own hosting.

    It also gives me the option to add some recommended security headers with just one click.

    These are some basic, much-needed security headers that few people know about when using Cloudflare, and for some reason Cloudflare does not enable them by default.

    This can be found after logging into your account: Select domain > Rules > Transform Rules > Managed Transforms

    These are quite necessary and increase security by a lot.

    If you want everyone who accesses the site to have extra security, or if you need to add custom headers, you can add them with the following option:

    This can be found after logging into your account: Select domain > Rules > Transform Rules > Modify Response Header

    I have added some as per my needs. You can configure whatever best suits your own needs.
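    For reference, a typical set of response security headers looks something like this (values are illustrative; tune them to your site before copying):

    Strict-Transport-Security: max-age=31536000; includeSubDomains
    X-Content-Type-Options: nosniff
    X-Frame-Options: SAMEORIGIN
    Referrer-Policy: strict-origin-when-cross-origin
    Permissions-Policy: geolocation=(), camera=(), microphone=()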

  • Demystifying MySQL’s UTF8MB4: A Guide to Character Encoding in Databases with WordPress in GCP and Cloud SQL

    UTF8MB4

    Introduced in MySQL version 5.5.3, utf8mb4 is MySQL’s full implementation of the UTF-8 character encoding scheme. While Unicode defines room for about 1.1 million code points, MySQL’s older utf8 encoding only covers the Basic Multilingual Plane (BMP); utf8mb4 can encode the full range of Unicode characters, including emojis and characters outside the BMP.

    • utf8mb4: A UTF-8 encoding of the Unicode character set using one to four bytes per character.
    • utf8mb3: A UTF-8 encoding of the Unicode character set using one to three bytes per character.

    In MySQL, utf8 is currently an alias for utf8mb3, which is deprecated and will be removed in a future MySQL release. At that point, utf8 will become a reference to utf8mb4.

    So, regardless of this alias, you can consciously set utf8mb4 encoding yourself.

    UTF-8 is a variable-length encoding: storing one code point requires one to four bytes. However, MySQL’s encoding called “utf8” (an alias of “utf8mb3”) only stores a maximum of three bytes per code point.

    The utf8mb4 character set is useful because nowadays we need support for storing not only language characters but also symbols, newly introduced emojis, and so on.

    Cloud SQL – GCP

    Google Cloud Platform [GCP] by default uses utf8 when creating a database, and that is aliased to utf8mb3, which is okay for most cases.

    I was trying this out with WordPress, and it was not enough for me.

    So I went searching for the best option, and I found that utf8mb4 was the go-to solution.

    The very famous and professional WordPress VIP platform uses the same thing. So it is really a good and important practice when it comes to databases and WordPress.
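    If you want to set this explicitly yourself, something like the following works (database and table names are placeholders):

    # Create a new database with utf8mb4 from the start
    mysql -e "CREATE DATABASE wp_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"

    # Or convert an existing table in place
    mysql -e "ALTER TABLE wp_db.wp_posts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"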

  • Piping Bash

    Bash is good, ZSH is also good….but the fact that they both handle piped-in scripts oddly is bad.

    Well, technically piping does work, but it works in a different way.

    Sometimes that’s okay for us; other times it is not.

    So what’s the issue here…

    Say you have code like this in your bash script:

    apt install fail2ban
    ...
    read -sp "Enter your Name : " name
    sed -i "/name =/ s/$/$name/" /etc/hostname
    ...

    The above code works if we save it inside a script.sh and run it normally, but when piping it to bash it does not work as we expected [it does not wait to get the input from read; instead it skips ahead to the next command, regardless of what the previous command was].

    This is because when you pipe a script to bash, the script itself arrives on bash’s stdin, so read ends up consuming the next line of the script as its “input” instead of waiting for the terminal. Executing from a pipe and executing from a file can seem similar, but they are clearly not.

    You can think of this like a rude boss who shows no respect but demands tons of it: you say good morning when your boss comes in, but he just walks away.

    When you run the same thing saved as a script, it’s like giving your boss a handshake and him shaking back until you both mutually let go.

    There are fixes, though. For example, you can write read -sp "Enter your Name: " name < /dev/tty to tie read to the terminal’s input, and it should then work as we wanted. But this only works when a TTY is attached.
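    A quick way to see the difference (a minimal sketch; save it as demo.sh):

    #!/usr/bin/env bash
    # With < /dev/tty, read waits for the terminal even when the script is piped in
    read -sp "Enter your Name: " name < /dev/tty
    echo
    echo "Hello, $name"

    Running bash demo.sh and cat demo.sh | bash should now both prompt you; drop the < /dev/tty and the piped version will silently consume the script’s own next line instead.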

  • Pa11y and Pa11y-CI Accessibility testing

    Accessibility is really important when it comes to making a good site: it makes the site available for everyone to use, and it makes things easier for crawlers as well. The best part is automating this, so the site keeps moving fast without worrying about accessibility, while a bot generates the reports for you to fix later. So let’s build this automation.


    Pa11y (Pally)

    A command-line interface which loads web pages and highlights any accessibility issues it finds. Useful for when you want to run a one-off test against a web page.


    I will be covering what their documentation forgot to document (things I learnt from my own testing and experimenting).


    Running test with sitemap

    Pa11y and Pa11y-CI say that you can test with sitemap.xml files, which is really good. Here are the things they didn’t document well:

    • Pa11y can scan sub-sitemap links found in a sitemap and scan them too
    • Pa11y can parse a sitemap index, automatically pull in its links and sub-sitemap files, add them to the list of links to test, and test them automatically
    • Pa11y can run sitemap tests, local HTML file tests, and config-file-based tests all at once (all can be done at the same time)

    Running test with local html files

    Pa11y and Pa11y-CI say that you can test with local HTML files, which is really good. Here is what they didn’t document well:

    • Pa11y can scan local HTML files while still using a sitemap or JSON-based config (all can be done at the same time)

    Running test with config.json

    Pa11y and Pa11y-CI say that you can test with config.json files, which is really good. Here is what they didn’t document well:

    • Pa11y can do scans with config.json files while still using a sitemap or local HTML based config (all can be done at the same time)
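    A combined run looks something like this (URLs and file names are placeholders; --sitemap and --config are standard pa11y-ci flags):

    # Extra URLs and defaults live in the JSON config
    cat > .pa11yci.json <<'EOF'
    {
      "defaults": { "timeout": 30000 },
      "urls": [ "https://example.com/contact" ]
    }
    EOF

    # Test everything from the sitemap plus the config's URLs in one go
    pa11y-ci --sitemap https://example.com/sitemap.xml --config .pa11yci.json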

    Bonus

    Accessibility testing automation with GitHub Actions

    steps:
      - name: Setup node
        uses: actions/setup-node@v4
        with:
          node-version: 18

      - name: Install pa11y-ci.
        run: npm install -g pa11y-ci

      - name: Checkout source.
        uses: actions/checkout@v4

      - name: Install Chrome.
        uses: browser-actions/setup-chrome@v1
        with:
          chrome-version: 120

      - name: Run pa11y-ci.
        run: pa11y-ci https://immanuelraj.dev
  • Rootless docker


    ASSUMPTIONS

    1. You have no root-installed Docker
    2. You have no root access (not even apt install)
    3. You want to install rootless, systemd-style Docker, Docker Compose, and all the other plugins
    4. You have an Ubuntu system – latest (or a VM) with the default kernel
    5. Your Ubuntu distribution is not a cut-down version

    Docker Rootless Installation Prerequisites

    loginctl enable-linger rtcamp # It creates /run/user/$(id -u)

    export XDG_RUNTIME_DIR=/run/user/$(id -u)

    echo "export PATH=/home/rtcamp/bin:\$PATH" >> ~/.bashrc

    echo "export DOCKER_HOST=unix:///run/user/\$(id -u)/docker.sock" >> ~/.bashrc

    Docker Installation

    curl -fsSL https://get.docker.com/rootless | sh

    Installing docker compose

    DOCKER_COMPOSE_VER=$(curl --silent https://api.github.com/repos/docker/compose/releases | jq -r '.[0].tag_name')

    DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker} # only for the script to run

    mkdir -p $DOCKER_CONFIG/cli-plugins

    curl -SL https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VER}/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose

    chmod u+x $DOCKER_CONFIG/cli-plugins/docker-compose
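    At this point you can sanity-check the install (assumes the steps above completed without errors):

    systemctl --user status docker   # the rootless daemon should be active
    docker run --rm hello-world      # pulls and runs as your user
    docker compose version           # compose is picked up from ~/.docker/cli-plugins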

    It all works 🎉 !

    But here are some common problems

    Network slow

    For this you can refer to https://docs.docker.com/engine/security/rootless/#network-is-slow

    Using privileged ports aka <1024

    #### Login to user having sudo access ####

    cp -v /etc/sysctl.conf /tmp/sys.conf
    echo "net.ipv4.ip_unprivileged_port_start=0" >> /tmp/sys.conf

    sudo mv -v /tmp/sys.conf /etc/sysctl.conf

    sudo sysctl --system
  • GitHub Rate limits

    Rate limits are sometimes scary. Sometimes they’re temporary and go away soon…sometimes they take, like, foreverrrrr.

    I got rate limited by GitHub while using its gh CLI inside GitHub Actions. Well, I didn’t spam it, but I was doing a lot of work in a very short amount of time: I ran the action almost 200 times a day, and the amount of work each of those 200 runs did was huge, considering what I was doing (testing and developing a tool to run with GitHub Actions).

    It would have been nice if GitHub provided a test environment for this, but sadly they don’t…..

    Also, I was using my own runner (the company’s self-hosted GitHub Actions server).

    And finally, I got blocked and rate limited, heavily.

    I was curious to find out what my limits are for these actions, and I found this:

    curl -L \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer <YOUR-GITHUB-TOKEN>" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    https://api.github.com/rate_limit

    More info here: https://docs.github.com/en/rest/rate-limit/rate-limit?apiVersion=2022-11-28

    When you run the above command, you will be able to see your rate limits.

    Some limits are per minute, some per hour, and some per day, so be careful when exhausting them.

    You should get output something like this:
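    Here is an abridged sample of the response shape (the real output lists more resources, and the numbers depend on your account and token):

    {
      "resources": {
        "core":    { "limit": 5000, "remaining": 4999, "reset": 1691591363, "used": 1 },
        "search":  { "limit": 30,   "remaining": 30,   "reset": 1691591091, "used": 0 },
        "graphql": { "limit": 5000, "remaining": 5000, "reset": 1691593228, "used": 0 }
      },
      "rate": { "limit": 5000, "remaining": 4999, "reset": 1691591363, "used": 1 }
    }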

  • Certbot SSL Limits – letsencrypt (Rate limit)

    (Below, “c” = certificates.)

    5c – duplicate certificates, per week

    300c – new orders per account, in 3 hrs max

    50c – per registered domain, per week

    When these limits are crossed, we will be rate limited or even banned.

    – We can be banned by our domain name / email / IP address / IP address range

    Read more here: https://letsencrypt.org/docs/rate-limits/

    You can use the staging/test server when testing: https://letsencrypt.org/docs/staging-environment/
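    With certbot, the easy way to exercise the staging environment is a dry run (standard certbot flags):

    # Hits the staging server, so it doesn't count against production limits
    sudo certbot certonly --standalone -d example.com --dry-run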

  • AH shit here we go again !

    I know, I know…..it’s really annoying when we type something without sudo and it asks for sudo, and now we have to retype the whole thing with sudo (or go back to the previous command [with the up arrow] and then move around with the arrow keys or the Home key or whatever…).

    While retyping works…there is an even better approach. It helps all the time and in many ways, not only with sudo.

    !!

    YESSS….that’s all it is.

    The !! history expansion expands to the previously run command. It can be used with sudo like:

    sudo !!

    !!

    We can use it with literally anything. One caveat: !! is history expansion, which is enabled in interactive shells (including interactive subshells) but disabled by default inside non-interactive shell scripts and automation.
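    A typical interactive session looks something like this (output abridged):

    $ apt install htop
    E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
    $ sudo !!
    sudo apt install htop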

    Have fun

  • Finding the user

    To know who is using your system/server right now you can use

    who

    or

    w

    It will show you who is logged in, what they are doing, when they logged in, etc.
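    The output of w looks something like this (values are representative):

    $ w
     10:42:01 up 3 days,  2:11,  2 users,  load average: 0.08, 0.05, 0.01
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
    immanuel pts/0    203.0.113.5      09:58    1.00s  0.04s  0.00s  w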

  • Power to you.

    You don’t always need access; you just need access to someone who has access.

    Let’s say I am a normal user and I need to run a script in Linux, where neither the file nor I have the permission to run it or to change its permissions.

    Say the test.sh file does not have execute permission set.

    BBBUUUTTTTTTTT…..

    It can still run, without any changes to its permissions.

    Since /bin/bash has the permission to run, if you run the script with it, then it will work.
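    A quick illustration (paths and output are representative):

    $ ls -l test.sh
    -rw-r--r-- 1 root root 32 Jan  1 10:00 test.sh
    $ ./test.sh
    bash: ./test.sh: Permission denied
    $ bash test.sh
    Hello from test.sh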

    This is why you need to be very careful when granting permissions, and this is how attackers get access in some cases.

  • WordPress and Nginx (OpenResty)

    WordPress

    WordPress is a web content management system. It was originally created as a tool to publish blogs but has evolved to support publishing other web content, including more traditional websites, mailing lists and Internet forums, media galleries, membership sites, learning management systems and online stores.


    Nginx (OpenResty)

    OpenResty® is a full-fledged web platform that integrates our enhanced version of the Nginx core, our enhanced version of LuaJIT, many carefully written Lua libraries, lots of high quality 3rd-party Nginx modules, and most of their external dependencies. It is designed to help developers easily build scalable web applications, web services, and dynamic web gateways.


    File Structure

    The file structure is important, so you know which files to modify and which not to. Some are auto-generated and auto-modified (as we change settings in the admin GUI dashboard); some are edited manually (only a very few things, and very rarely).

    https://www.wpbeginner.com/beginners-guide/beginners-guide-to-wordpress-file-and-directory-structure/

    The above link talks about some basic file structure.




    Installing

    The installation can be done by two methods:

    • Manually unpacking files (from tar)
    • Using WP-CLI (Almost Automated)

    Manual Installation

    Manual installation means creating a directory, downloading the package (tar) from wordpress.org, unpacking it into that directory, and then setting up the configs and the database that are needed.

    https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-lemp-on-ubuntu-18-04

    More about the manual method above.

    WP-CLI

    WP-CLI basically stands for WordPress CLI and can be used to automate tasks like creating the DB, the site root directory, installation, configuration, etc. This makes it easy to install WordPress on any machine.

    https://make.wordpress.org/cli/handbook/guides/installing/#recommended-installation

    https://make.wordpress.org/cli/handbook/guides/quick-start/#practical-examples

    The steps to follow for the automatic (WP-CLI) method are above; a minimal flow is sketched below.
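    A minimal sketch of that flow (all names, paths, and credentials here are placeholder values):

    wp core download --path=/var/www/wordpress/wordpress.example.com
    wp config create --dbname=wordpress --dbuser=wp_user --dbpass=changeme
    wp db create
    wp core install --url=wordpress.example.com --title="My Site" \
      --admin_user=admin --admin_password=changeme --admin_email=admin@example.com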

    Nginx Config

    Both of these need the Nginx config to be set up manually (which can itself be automated).

    ...
    http {
        include mime.types;
        default_type application/octet-stream;
        server {
            ...
            root /var/www/wordpress/wordpress.example.com;
            index index.php index.html;
            ...
            location / {
                index index.php index.html;
                try_files $uri $uri/ /index.php$is_args$args;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { log_not_found off; access_log off; allow all; }
            location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                expires max;
                log_not_found off;
            }
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                root /var/www/wordpress/wordpress.example.com;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
                deny all;
            }
            ...

    This is the basic config that needs to be in place in order to have a fully working WordPress installation.


    Possible Issues

    You can face issues like broken CSS on the site if Nginx or SSL is not configured properly.


  • Linux Saviours

    A few basic Linux (Ubuntu) things that will save you time and get the job done with ease.

    Linux and its programs are, by default, built to allow users to accomplish tasks easily. But many people don’t even know that there are many other useful commands and tools.

    Many people know the basics, or only what they use most often, but the tools include more than what a general end user knows or uses. These extras make life easier for us. We shall see some of them here.

    Help

    There are a bunch of tools and places where we can get information and help about a certain tool or command. Most well-built packages/applications/tools have these things well documented and in place; some custom ones may not. Some of the things that can help us get help are:

    • man command (Manual) # Man pages
      • man passwd
    • whatis command
      • whatis passwd
    • -h / --help argument
      • passwd --help
    • info command
      • info passwd
    • /usr/share/doc location
      • cat /usr/share/doc/passwd/*

    These are the commands and arguments and locations where we can find many useful information about any tool that has proper documentation.

    File System Structure

    Everything in Linux is a file (just like how everything is an object in Python), so these need to be well defined and well structured to have everything work properly and keep things in the right place.

    The "/" (forward slash) is the start of the file system (also called the root of the file system). Everything in Linux follows this tree structure. Some of the most used and important directories are as follows.

    • Home Directories: /root, /home/username
    • User Executables: /bin, /usr/bin, /usr/local/bin
    • System Executables: /sbin, /usr/sbin, /usr/local/sbin
    • Shared Libraries: /lib, /usr/lib, /usr/local/lib
    • Kernels & Boot-loaders: /boot
    • Configuration Files: /etc
    • Device Files: /dev
    • Temporary Files: /tmp
    • Other Mount-points: /media, /mnt
    • Server Data: /var, /srv
    • System Information: /proc, /sys
    • Optional Applications: /opt

    Sometimes when installing Linux, people put different paths on different partitions. Even when they are on different partitions, the file system structure remains the same.

    It stays the same because each partition is mounted at its path, or symlinked into the root file structure.

    The File System (FS) itself is a different topic: FS is all about the partition scheme, partitioning methods, optimization, etc.

    File(s) and Folder(s) Permission(s)

    Every file and folder has default, predefined permissions; they can be changed/removed/revoked with a few commands. We need to know what the file and folder permissions are to understand what is happening, how to access something, why it is not accessible, and who can access it.

    Linux’s access control method of users and groups can control access to a group of files/folders depending on the permissions given to a user or group. A user can be given more or fewer permissions over a file/folder even when their group doesn’t have them; this can be done by anyone with higher privileges (root).

    These can be done by commands like:

    • chown (Change Owner)
      • chown <owner name> <file|folder>
      • chown -R immanuel /home/immanuel # -R applies the change recursively
      • sudo cat /etc/passwd # Shows the list of all users (i.e., possible owners)
    • chmod (Change Mode)
      • chmod <permission> <file|folder>
      • chmod 755 script.sh

      Who                            Permission value
      1st digit: Owner permission    4 = Read
      2nd digit: Group permission    2 = Write
      3rd digit: Other permission    1 = Execute

    • chgrp (Change Group)
      • chgrp <group> <file|folder>
      • cat /etc/group # Shows all the groups

    Tab (Key/Space)

    The Tab key is one of the cool keys in keyboard for linux. It helps us in autofill and autcomplete.

    Say if you are going to a reposity that has averylongname then you dont have to type it completely. You just type a few characters like avery and then press tab and it will autocomplete it.

    If there are more than one of the same patterend names then press the tab key twice to show all the files/folders with the same names then you can choose which one to use.

    Tilde (~)

    This is used to go to your home directory instantly, or to refer to something in the home directory.

    • cd ~ (normally just cd will do the same)
    • cd ~/testfolder/testfile

    Pipes and Redirection (|, <, >, >>, 2>)

    Pipes and redirections are used in the terminal to send input/output/error to a file or to the next command.

    command < file         Send file as input to the command.
    command > file         Redirect STDOUT of command to file.
    command >> file        Append STDOUT of command to file.
    command 2> file        Redirect STDERR of command to file.
    command 2>> file       Append STDERR of command to file.
    command1 | command2    Redirect output of command1 to command2 as input.

    Text Processing in the Terminal

    Here’s a description of the main text processing tools in the terminal:

    • cut
    • wc
    • head
    • cat
    • tail
    • diff
    • grep
    • sed

    Cut:

    • Purpose: Extracts specific sections from each line of a file or input.
    • Syntax: cut [options] [file]
    • Common options:
      • -f fields: Selects specific fields (columns) based on delimiters.
      • -d delimiter: Specifies the delimiter to use (default is tab).
      • -c characters: Selects specific characters from each line.
    • Example: cut -f 1,3 -d ":" /etc/passwd
      • # Extract first and third fields (username and UID)

    WC:

    • Purpose: Counts the number of lines, words, and characters in a file or input.
    • Syntax: wc [options] [file]
    • Common options:
      • -l: Counts lines.
      • -w: Counts words.
      • -c: Counts characters.
    • Example: wc -lw myfile.txt
      • # Count lines and words in myfile.txt

    Head:

    • Purpose: Displays the first few lines of a file.
    • Syntax: head [options] [file]
    • Common options:
      • -n number: Specifies the number of lines to display (default is 10).
    • Example: head -n 5 myfile.txt
      • # Show the first 5 lines of myfile.txt

    Cat:

    • Purpose: Concatenates files and prints their contents to the standard output.
    • Syntax: cat [options] [file1] [file2]...
    • Common options:
      • -n: Numbers all lines.
      • -e: Displays non-printing characters.
    • Example: cat -n file1.txt file2.txt
      • # Display contents of both files with line numbers

    Tail:

    • Purpose: Displays the last few lines of a file.
    • Syntax: tail [options] [file]
    • Common options:
      • -n number: Specifies the number of lines to display (default is 10).
      • -f: Follows the file, displaying new lines as they are added.
    • Example: tail -f mylog.txt
      • # Monitor the log file for new entries

    Diff:

    • Purpose: Compares two files and displays their differences.
    • Syntax: diff [options] file1 file2
    • Common options:
      • -u: Produces a unified diff, showing differences in a more readable format.
      • -c: Produces a context diff, showing differences along with surrounding lines.
      • -y: Produces a side-by-side diff, showing files in two columns. 
    • Example: diff -u old_file.txt new_file.txt
      • # Shows differences between the files in a unified format

    Grep:

    • Purpose: Searches for lines matching a pattern in a file or input.
    • Syntax: grep [options] pattern [file]
    • Common options:
      • -i: Ignores case distinction.
      • -v: Inverts the match, printing lines that don’t match the pattern.
      • -r: Recursively searches for patterns in all files within a directory.
      • -n: Prints line numbers of matching lines.
    • Example: grep -i "error" log.txt
      • # Finds lines containing “error” (case-insensitive)

    Sed:

    • Purpose: Stream editor for modifying text files.
    • Syntax: sed [options] 'command' file
    • Common commands:
      • s/pattern/replacement/: Substitutes pattern with replacement.
      • d: Deletes lines matching a pattern.
      • p: Prints lines matching a pattern.
      • i: Inserts text before a line.
      • a: Appends text after a line.
    • Example: sed 's/localhost/myserver.com/g' config.txt
      • # Replaces “localhost” with “myserver.com” globally

    Network Interface Configuration

    Purpose: Manages network interfaces on a Linux system.

    Tools:

    • ifconfig: Displays information about network interfaces and enables/disables them.
    • system-config-network: Graphical tool for configuring network interfaces (may not be installed by default).
    • Network Configuration Files: Store settings in /etc/sysconfig/network-scripts/ directory.

    Key Concepts:

    Network Interface Files:

    • Located in /etc/sysconfig/network-scripts/ifcfg-ethX (replace ethX with the interface name).
    • Contain settings in the format VARIABLE=VALUE.
    • Common settings:
      • DEVICE: Interface name.
      • HWADDR: MAC address (optional).
      • BOOTPROTO: DHCP or static IP configuration.
      • IPADDR: IP address (for static configuration).
      • NETMASK: Network mask (for static configuration).
      • GATEWAY: Default gateway (for static configuration).
      • ONBOOT: Whether to bring up the interface at boot.
      • USERCTL: Whether to allow non-root users to control the interface.
      • TYPE: Interface type (e.g., Ethernet).

    Global Network Settings:

    • Located in /etc/sysconfig/network.
    • Common settings:
      • NETWORKING: Enables/disables networking.
      • HOSTNAME: System hostname.
      • GATEWAY: Default gateway (can be overridden in interface files).

    Enabling/Disabling Interfaces:

    • ifup ethX: Brings up interface ethX.
    • ifdown ethX: Brings down interface ethX.

    DNS Configuration:

    • Domain Name Service (DNS) translates hostnames to IP addresses.
    • DNS server addresses specified in /etc/resolv.conf or by DHCP.
    • Common settings:
      • search: Domain names for incomplete hostnames.
      • nameserver: IP addresses of DNS servers.

    Local DNS Server Configuration:

    • Uses /etc/resolv.conf.
    • Order of nameserver entries is important (fastest and available servers first).

    Process Management

    Purpose: Manages processes running on a Linux system.

    Key Concepts:

    Processes:

    • Sets of instructions loaded into memory.
    • Identified by unique Process IDs (PIDs).
    • Have UIDs, GIDs, and SELinux contexts for filesystem access.
    • Tracked in the /proc filesystem.

    Viewing Processes:

    • ps: Lists processes, with options for:
      • All terminals (a)
      • Non-terminal processes (x)
      • User information (u)
      • Parentage (f)
      • Custom output (o)
    • pgrep: Finds processes by predefined patterns.
    • pidof: Finds processes by exact program names.

    Process States:

    • Running: Actively using the CPU.
    • Sleeping: In memory but inactive.
    • Uninterruptable Sleep: Waiting for a resource, cannot be woken by signals.
    • Zombie: Terminated but not fully flushed from process list.

    Signals:

    • Simple messages sent to processes using commands like kill.
    • Processes respond to signals they’re programmed to recognize.
    • Common signals:
      • HUP (1): Reread configuration files.
      • KILL (9): Terminate immediately.
      • TERM (15): Terminate cleanly.
      • CONT (18): Continue if stopped.
      • STOP (19): Stop process.

    Sending Signals:

    • kill: By PID.
    • pkill: By pattern.
    • killall: By name.
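    For example (the PID and process names are placeholders):

    kill -TERM 1234    # ask PID 1234 to terminate cleanly (signal 15)
    pkill -HUP nginx   # send HUP to every process matching "nginx"
    killall -9 myapp   # force-kill every process named "myapp"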

    Scheduling Priority:

    • Determines how processes share the CPU.
    • Affected by “nice” value (-20 to 19, default 0).
    • Lower nice value = higher priority.

    Altering Priority:

    • nice: When starting a process.
    • renice: After process has started (root only can decrease).

    Process Management Tools:

    • top / htop: CLI tools for real-time process monitoring and management.
    • gnome-system-monitor: GUI tool for process management.

    Alias

    Purpose: Creates shortcuts for frequently used commands or combinations of commands.

    Syntax:

    • alias [new_name]="[original_command]"

    Examples:

    • alias ll="ls -l"
    • alias rm="rm -i" (prompts for confirmation before deletion)
    • alias grep="grep --color=auto" (adds color highlighting to grep output)
    • alias myip="curl ifconfig.me"
    • alias hist="history | grep"

    Key Points:

    • Aliases are temporary by default, lasting only for the current shell session.
    • To make aliases permanent, add them to your shell configuration file (~/.bashrc for Bash).
    • Aliases can be nested (aliases within aliases).
    • Use alias by itself to list all defined aliases.
    • Use unalias [alias_name] to remove an alias.

    Common Use Cases:

    • Shortening long commands.
    • Adding default options to commands.
    • Creating custom commands for specific tasks.
    • Correcting common typos.
    • Personalizing your shell environment.

    Find and Locate

    Purpose: Find files and directories on a Linux system.

    Key Differences:

    Feature         find                                     locate
    Search method   Searches the filesystem in real time     Uses a pre-built database for faster searches
    Accuracy        Always up-to-date                        Might miss recently added files if database is not updated
    Speed           Slower for large searches                Faster for most searches
    Flexibility     More options for fine-grained control    Limited options

    Find Command:

    Syntax:

    • find [path] [expression]

    Common Expressions:

    • -name: Find by filename.
    • -type: Find by file type (e.g., f for files, d for directories).
    • -size: Find by file size (e.g., +10M for files larger than 10MB).
    • -mtime: Find by modification time (e.g., -7 for files modified within the last 7 days).
    • -exec: Execute a command on each found file.

    Examples:

    • find /home -name "*.txt": Find all text files in /home.
    • find . -type f -size +10M: Find files larger than 10MB in the current directory.
    • find /var/log -mtime +30 -exec rm {} \;: Delete log files older than 30 days.

    Locate Command:

    Syntax:

    • locate [pattern]

    Key Points:

    • Relies on a database updated by updatedb (usually run daily by cron).
    • Use updatedb manually to update the database before using locate.
    • Faster for general searches but might miss recently added files.

    Examples:

    • locate "*.jpg": Find all JPEG files.
    • locate "report.pdf": Find a file named “report.pdf”.

    Best Practices:

    • Use find for real-time, accurate searches with flexible options.
    • Use locate for quick, general searches when database is up-to-date.
    • Consider using updatedb before locate to ensure best results.

    System Logs

    Purpose: Centralized storage and management of messages, errors, and debugging information from applications on Red Hat Enterprise Linux systems.

    Common Log Files:

    • /var/log/dmesg: Kernel messages from the boot process.
    • /var/log/messages: Standard system log containing messages from system software, non-kernel boot issues, and dmesg output. (Readable by root only.)
    • /var/log/secure: Security-related messages and errors (logins, TCP wrappers, xinetd). (Readable by root only.)
    • /var/log/audit/audit.log: Audited messages from the kernel, including SELinux messages. (Use ausearch and aureport to view.)

    Monitoring System Logs:

    • Use text editors or tools like less or tail to view log files.
    • Employ tools like grep to filter specific content.

    Generating Log Messages:

    • Use logger(1) to manually generate log messages.

    Audit Log Tools:

    • ausearch: Search for specific events in the audit log.
    • aureport: Generate reports based on audit log data.

    Example:

    • Generate a log message: logger This is a test message.
    • View the message in /var/log/messages: tail -n1 /var/log/messages
  • The Philosophy and History of Linux

    The Philosophy and History of Linux


    Introduction

    Linux is deeply rooted in the principles of Open Source software, a movement that promotes transparency, collaboration, and freedom in software development. This article explores the foundations of Open Source, the history of Linux, and the guiding philosophies that shape its ecosystem today.


    Understanding Open Source

    The Open Source Initiative

    Open Source software is built on key principles that ensure accessibility and collaboration:

    • Free access and distribution – The software, along with its source code, is freely available to the public.
    • Freedom to modify – Users can modify the source code and create derivative works.
    • Maintaining integrity – Code modifications are typically provided as patches to preserve the original developer’s work.
    • License inheritance – Redistribution of modified software must adhere to the original license terms.
    • Non-discriminatory licensing – Open Source licenses do not restrict users based on their identity, purpose, or industry.

    The Free Software Foundation (FSF)

    The Free Software Foundation (FSF) advocates for software freedom, emphasizing:

    • The right to run software for any purpose.
    • Access to source code to understand and modify how software works.
    • The ability to redistribute software freely.
    • The freedom to create and distribute modified versions.

    The Origins of Linux

    In 1984, the FSF launched the GNU Project to create a free, UNIX-like operating system. They developed replacements for UNIX utilities like bash, ls, and various system libraries. To uphold their commitment to software freedom, the FSF introduced the General Public License (GPL), which enforces the Four Freedoms of software usage and modification.

    However, the GNU Project lacked a functioning kernel, which is the core component of an operating system.

    In 1991, Linus Torvalds, a Finnish computer science student, developed an Open Source, UNIX-like kernel known as Linux. Released under the GPL, it quickly gained traction and was combined with GNU utilities, forming a fully functional, free, and Open Source operating system.

    Today, the Linux kernel powers everything from personal computers and servers to mobile devices and supercomputers.


    Core Principles of Linux

    1. Everything is a File

    In Linux, everything—including hardware devices—is represented as a file. This simplifies access and security management, as system utilities interact with hardware using the same methods as regular files.

    2. Small, Single-Purpose Programs

    Instead of large, multi-functional applications, Linux follows the UNIX philosophy of using small utilities that each perform one task exceptionally well.

    3. Composability: Chaining Programs Together

    Linux enables users to combine multiple small utilities by passing the output of one program as the input to another. This allows for highly flexible and powerful automation.
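    As a tiny illustration, each program below does one job and the shell chains them together (a generic sketch, not tied to any specific workflow):

    # List all processes, keep the lines mentioning nginx, then count them
    # (the grep process itself may appear in the count)
    ps aux | grep nginx | wc -l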

    4. Minimal Interactive Interfaces

    Most Linux commands expect arguments and options at execution, reducing unnecessary prompts. Interactive tools exist but are reserved for use cases where they make sense, such as text editing.

    5. Text-Based Configuration

    Configuration settings are stored in plain text files, making them easy to read, edit, transfer, and track using version control systems.


    Conclusion

    Linux, born from the Open Source movement, continues to thrive because of its transparent development model, strong community, and guiding principles. By embracing freedom, modularity, and efficiency, Linux has become one of the most influential operating systems in the world.


  • OpenResty SSL Setup: Install and Secure a Site with Nginx and Let’s Encrypt on Ubuntu

    OpenResty SSL Setup: Install and Secure a Site with Nginx and Let’s Encrypt on Ubuntu

    Securing your website with SSL is crucial for both security and SEO benefits. In this guide, we’ll walk through the complete process of OpenResty SSL setup, including how to install OpenResty on Ubuntu, configure SSL using Certbot, and enable auto-renewal.


    Prerequisites for OpenResty SSL Setup

    Before we begin, ensure you have the following:

    • A Linux server (Ubuntu preferred)
    • Root or sudo access
    • A registered domain name pointing to your server

    Step 1: Install OpenResty on Ubuntu

    OpenResty is an extended version of Nginx that includes powerful scripting capabilities. Follow the official OpenResty installation guide to install it on your Ubuntu server.

    Step 2: Install Certbot for OpenResty Nginx SSL

    Certbot is an automated tool for obtaining SSL certificates from Let’s Encrypt.

    To install Certbot, run:

    sudo apt update
    sudo apt install certbot -y

    Step 3: Obtain an SSL Certificate for OpenResty HTTPS Configuration

    Run the following command to generate an SSL certificate for your domain:

    sudo certbot certonly --standalone --preferred-challenges http -d example.com

    Replace example.com with your actual domain name. Note that standalone mode spins up its own temporary web server on port 80, so anything already listening there (such as OpenResty) must be stopped for the duration of the challenge.

    Once completed, your SSL certificates will be located in:

    /etc/letsencrypt/live/example.com/

    Step 4: Configure Nginx OpenResty SSL Settings

    Now, update your Nginx OpenResty configuration to use the SSL certificate.

    Open your configuration file:

    sudo nano /usr/local/openresty/nginx/conf/nginx.conf

    Nginx OpenResty HTTPS Configuration

    Add the following server block inside the http block:

    server {
        listen 443 ssl;
        server_name example.com;
    
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
    
        location / {
            root /var/www/html;
            index index.html;
        }
    }
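    Optionally, a plain-HTTP server block alongside it can redirect all traffic to HTTPS (a common companion snippet, not part of the original guide):

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }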

    Save and exit (Ctrl + X, then Y, then Enter).

    Step 5: Restart OpenResty Nginx for SSL to Take Effect

    Restart Nginx OpenResty for the changes to take effect:

    sudo systemctl restart openresty

    If Nginx OpenResty is not set up as a service, you may need to start it manually:

    sudo /usr/local/openresty/nginx/sbin/nginx
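    Either way, you can then confirm the certificate is being served (-I fetches only the response headers):

    curl -I https://example.com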

    Step 6: Auto-Renew OpenResty SSL Certificate with Certbot

    Let’s Encrypt certificates expire every 90 days, so setting up auto-renewal is important.

    Add the following cron job to renew the certificate automatically:

    sudo crontab -e

    Add this line at the end:

    0 0 * * * certbot renew --quiet && systemctl reload openresty

    This will check daily at midnight and renew the certificate whenever it is close to expiry.
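    You can also verify that renewal will succeed, without consuming production quota, by doing a dry run against the staging environment:

    sudo certbot renew --dry-run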

    Conclusion

    Your Nginx OpenResty server is now secured with SSL! You’ve successfully completed the OpenResty SSL setup, installed Nginx OpenResty, configured SSL with Certbot, and set up auto-renewal for your certificates. Now, your website can securely serve content over HTTPS.


    If you have any questions or face issues, feel free to drop a comment below!

  • Step-by-Step Guide to Configuring Nginx and PHP-FPM on Ubuntu

    Step-by-Step Guide to Configuring Nginx and PHP-FPM on Ubuntu

    In this blog, we will see how to properly configure Nginx and PHP-FPM on a Linux (Ubuntu) machine or server.


    Pre-Requisites:

    Linux (Ubuntu) server.
    
    PHP.
    
    Nginx.

    Considering that the Linux server is already set up and running Ubuntu, let’s get started with the Nginx and PHP-FPM setup.


    Installing Nginx.

    You can follow THIS link and install Nginx

    Installing PHP

    You can follow THIS link and install PHP.


    Basic Nginx (V 1.24) Setup for PHP-FPM (8.4)

    server {
        ...
        # Add index.php to the index directive
        index ... index.php ...;

        # Pass PHP scripts on Nginx to the FastCGI (PHP-FPM) server
        location ~ \.php$ {
            include fastcgi_params;
            # Nginx php-fpm socket config (match the version to your installed PHP):
            fastcgi_pass unix:/run/php/php8.4-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

        # Deny access to Apache .htaccess files,
        # in case the Apache and Nginx document roots coincide
        location ~ /\.ht {
            deny all;
        }
        ...
    } # End of PHP-FPM Nginx config example

    This is just an example config snippet, not the whole configuration.

    Depending on your system's default configuration, you may need to change the fastcgi_pass parameter. It can be one of the following variations:

    fastcgi_pass 127.0.0.1:9000; # Pass PHP requests to PHP-FPM over a local TCP socket
    
    fastcgi_pass unix:/run/php/php8.4-fpm.sock; # Pass them over a Unix socket

    The Unix socket location may vary per system (it can be /run/php/php8.4-fpm.sock or /var/run/php/php8.4-fpm.sock). This makes no difference when the two locations are symlinked, as they usually are; if they are not, take care to point fastcgi_pass at the actual socket path.

    Another way is to change the PHP-FPM pool file (pool.d/www.conf) to listen on localhost instead of a Unix socket, as shown below.
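
    A minimal sketch of that change, assuming PHP 8.4 installed from the Ubuntu packages (adjust the path to your PHP version):

    sudo nano /etc/php/8.4/fpm/pool.d/www.conf

    ; In www.conf, replace the socket line:
    ; listen = /run/php/php8.4-fpm.sock
    ; with a local TCP address:
    listen = 127.0.0.1:9000

    If you do this, remember to point fastcgi_pass at 127.0.0.1:9000 to match.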

    Finally, restart the PHP-FPM and Nginx services.
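
    With systemd on Ubuntu this would be (assuming PHP 8.4; adjust the service name to your installed version):

    sudo systemctl restart php8.4-fpm
    sudo systemctl restart nginx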

    Here are some articles, blogs, and links that will help you learn more about these topics.

    https://nginx.org/en/docs/beginners_guide.html
    
    https://nginx.org/en/docs/switches.html
    
    https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
    
    https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
    
    https://easyengine.io/wordpress-nginx/why-nginx/
    
    https://serverguy.com/comparison/apache-vs-nginx/
    
    http://nginx.org/en/docs/http/ngx_http_core_module.html
    
    https://www.journaldev.com/26342/nginx-location-directive
    
  • Oh-My-ZSH

    Oh-My-ZSH

    Oh My Zsh is a delightful, open source, community-driven framework for managing your Zsh configuration. It comes bundled with thousands of helpful functions, helpers, plugins, themes, and a few things that make you shout… "Oh My ZSH!"

    Installing Oh My ZSH

    Oh My ZSH is installed by running one of the following commands in your terminal, from the command line with either curl or wget.

    Curl

    sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

    Wget

    sh -c "$(wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh -O -)"

    If you are on a fresh installation of macOS, the installer may set up some command-line tools first. On other operating systems you may need additional dependencies as well (zsh and git must be installed beforehand).

    Oh My ZSH Themes

    OMZ [Oh-My-ZSH] ships with many themes and plugins. Themes can be set and used with the following commands.

    omz theme set|use <theme name>

    Example

    - omz theme set gnzh
    - omz theme use gnzh

    omz theme set persists the theme in your .zshrc, while omz theme use applies it only to the current session.

    Alias

    OMZ allows setting aliases in the Zsh config file [.zshrc] itself. Open the .zshrc file in your home folder, go to the alias section, and add the alias you want. If an alias section does not exist, simply add one at the end of the file.

    # Custom made Alias.
    alias curlx="curl -XGET -IL"

    Then save the file. For the alias to take effect, reload your shell configuration [run source ~/.zshrc, or log out and back in].
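
    Once reloaded, the alias works like any other command. For example, the curlx alias above fetches only the response headers of a URL, following redirects:

    curlx https://example.com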

    Oh My ZSH Plugins

    Now we can move on to installing plugins, another interesting feature of OMZ.

    To do so,

    omz plugin load <plugin name>

    loads the plugin for the current session, and

    omz plugin enable <plugin name>

    enables it permanently.

    Example

    - omz plugin load tmux
    - omz plugin enable tmux

    You need to have tmux installed for this to work.
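
    Alternatively, plugins can be enabled by editing the plugins array in your .zshrc directly:

    # In ~/.zshrc: list the plugins you want in the plugins array
    plugins=(git tmux)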

    Customizing Tmux

    Tmux allows extensive customization. We shall customize the default tmux prefix key binding from Ctrl + B to Ctrl + A.

    To do it, edit the tmux config file:

    nano ~/.tmux.conf

    # Change the prefix key to C-a
    set -g prefix C-a
    unbind C-b
    bind C-a send-prefix

    Save the file, then relaunch your terminal and tmux (or run tmux source-file ~/.tmux.conf from inside tmux to reload the config).

  • How to Speed Up Windows 10: Advanced Boot Optimization (Part 3)

    Welcome to the third part of our comprehensive guide on optimizing Windows 10 performance. In this tutorial, we’ll focus on a powerful yet often overlooked method: configuring advanced boot options to utilize your processor’s full potential.

    Video Tutorial

    Before we dive into the written instructions, watch our step-by-step video guide:

    Can’t watch the video? No worries – follow our detailed written instructions below.

    Why Windows Optimization Matters

    Modern computers often ship with multi-core processors, but Windows doesn’t always automatically optimize its boot process to use all available cores. By manually adjusting these settings, you can potentially reduce boot times and improve overall system responsiveness.

    Prerequisites

    • Windows 10 operating system
    • Administrator access to your computer
    • A few minutes of your time
    • No additional software required

    Step-by-Step Guide

    1. Access System Configuration

    1. Press the Windows key + R to open the Run dialog
    2. Type msconfig and press Enter
    3. The System Configuration window will appear

    2. Modify Boot Settings

    1. Navigate to the “Boot” tab
    2. Click on “Advanced Options”
    3. In the new window, locate and check “Number of processors”
    4. From the dropdown menu, select the highest number available (this represents the maximum number of processor cores your system can use)
    5. Click “OK” to close the Advanced Options
    6. Click “Apply” in the System Configuration window
    7. Click “OK” to close System Configuration

    3. Complete the Process

    1. Windows will prompt you to restart your computer
    2. Choose whether to restart immediately or later
    3. The changes will take effect after the restart

    What to Expect

    After implementing these changes, you may notice:

    • Faster system boot times
    • Improved responsiveness during startup
    • Better overall system performance, especially during resource-intensive tasks

    Important Notes

    • This modification is safe and reversible
    • If you experience any issues, you can always return to System Configuration and uncheck the “Number of processors” option
    • The actual performance improvement will vary depending on your hardware specifications

    Additional Tips

    To maximize the benefits of this optimization:

    • Keep your Windows installation up to date
    • Regularly check for driver updates
    • Monitor your system’s performance to ensure the changes are beneficial

    Technical Background

    Windows 10, released in July 2015, introduced numerous performance improvements over its predecessors. While the operating system includes various automatic optimization features, some settings, like boot-time processor utilization, can benefit from manual adjustment to achieve peak performance.

    Next Steps

    Stay tuned for the next part in our Windows optimization series, where we’ll explore additional methods to enhance your system’s performance. Each optimization builds upon the previous ones, creating a comprehensively optimized Windows experience.


    Remember: Always create a system restore point before making any system configuration changes. While this modification is safe, it’s always good practice to have a backup plan.

  • How to speed up windows | PART 2

    In this post, I will show you how to speed up Windows.
    Please read to the end without skipping any steps to get better performance from your Windows PC.

    Windows 10:

    Windows 10 is a series of operating systems produced by the American multinational technology company Microsoft and released as part of its Windows NT family of operating systems. It is the successor to Windows 8.1 (2013), released nearly two years earlier; it was released to manufacturing on July 15, 2015, and broadly released for the general public on July 29, 2015. Windows 10 was made available for download via MSDN and TechNet, and as a free upgrade for users of retail copies of Windows 8 and Windows RT via the Windows Store. Windows 10 receives new builds on an ongoing basis, which are available at no additional cost to users, in addition to test builds of Windows 10 which are available to Windows Insiders. Devices in enterprise environments can receive these updates at a slower pace, or use long-term support milestones that only receive critical updates, such as security patches, over their ten-year lifespan of extended support.

    There are many ways to speed up Windows 10, depending on what we want. Here we will show you methods that speed up your Windows 10 machine without harming the system or its stability.

    CHANGING ADVANCED SYSTEM SETTINGS:

      • To change advanced system settings, first open File Explorer by pressing Win key + E
      • Then right-click "This PC" and select Properties
      • Then press "Advanced system settings"
      • Then, in the Advanced tab, go to the Performance section and press Settings
      • Then press "Adjust for best performance"
      • Click Apply and you're done 👍

      CHECK OUT THE VIDEO TUTORIAL FOR THIS METHOD HERE:

    • How to speed up windows | PART 1

      In this post, I will show you how to speed up Windows.
      Please read to the end without skipping any steps to get better performance from your Windows PC.

      Windows 10:

      Windows 10 is a series of operating systems produced by the American multinational technology company Microsoft and released as part of its Windows NT family of operating systems. It is the successor to Windows 8.1 (2013), released nearly two years earlier; it was released to manufacturing on July 15, 2015, and broadly released for the general public on July 29, 2015. Windows 10 was made available for download via MSDN and TechNet, and as a free upgrade for users of retail copies of Windows 8 and Windows RT via the Windows Store. Windows 10 receives new builds on an ongoing basis, which are available at no additional cost to users, in addition to test builds of Windows 10 which are available to Windows Insiders. Devices in enterprise environments can receive these updates at a slower pace, or use long-term support milestones that only receive critical updates, such as security patches, over their ten-year lifespan of extended support.

      There are many ways to speed up Windows 10, depending on what we want. Here we will show you methods that speed up your Windows 10 machine without harming the system or its stability.

      CLEANING UNNECESSARY FILES IN PC:

      1. To clean unnecessary files on your PC, go to Search (press Windows key + S)
      2. Then type "Disk Cleanup" and open it
      3. Now press the "Clean up system files" button
      4. Now select the options you want, press OK, and wait for it to finish; this will clean up the unwanted files stored on your PC

      CHECK OUT THE VIDEO TUTORIAL FOR THIS METHOD HERE: