TerminalVEX
The Linux terminal provides powerful tools for network management and diagnostics. Whether you need to check website availability, send API requests, or download files — terminal commands accomplish these tasks quickly and efficiently. In this guide, we will examine the most commonly used network commands in detail.
Network commands are critically important for system administrators and developers. You constantly need these tools for daily tasks such as diagnosing server issues, testing APIs, transferring files, and monitoring network performance. Learning these commands well will significantly improve your ability to resolve network problems quickly.
ping is the most fundamental tool for testing network connectivity. It sends ICMP (Internet Control Message Protocol) echo request packets to check whether a target host is reachable and to measure its round-trip time.
# Ping a server
$ ping google.com
PING google.com (142.250.185.206): 56 data bytes
64 bytes from 142.250.185.206: icmp_seq=0 ttl=118 time=12.3 ms
64 bytes from 142.250.185.206: icmp_seq=1 ttl=118 time=11.8 ms
64 bytes from 142.250.185.206: icmp_seq=2 ttl=118 time=12.1 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 11.8/12.1/12.3/0.2 ms
# Send a specific number of packets
$ ping -c 5 google.com
# Change packet size (default is 56 bytes)
$ ping -s 1000 google.com
# Change send interval (default is 1 second)
$ ping -i 0.5 google.com
# Show only summary information
$ ping -c 10 -q google.com
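Beyond interactive use, ping's exit status makes it useful in scripts: it exits 0 when at least one reply arrives and non-zero otherwise. A minimal sketch along those lines (the `is_up` helper name is an assumption; note that `-W` takes seconds on Linux iputils ping but milliseconds on macOS):

```shell
# is_up: hypothetical helper that relies on ping's exit status.
# ping exits 0 if at least one echo reply was received.
is_up() {
  ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

if is_up 127.0.0.1; then
  echo "reachable"
else
  echo "unreachable"
fi
```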
curl (Client URL) is an extremely versatile tool for transferring data over URLs. It supports HTTP, HTTPS, FTP, SFTP, and many more protocols. It is an indispensable command for web development and API testing.
# Simple GET request
$ curl https://api.example.com/users
# Show response headers too
$ curl -i https://api.example.com/users
# Show only response headers
$ curl -I https://api.example.com/users
# Display HTTP status code
$ curl -o /dev/null -s -w "%{http_code}" https://api.example.com/users
# Verbose output (for debugging)
$ curl -v https://api.example.com/users
# Follow redirects
$ curl -L https://example.com/redirect
# POST request with form data
$ curl -X POST -d "user=ali&password=123" https://api.example.com/login
# POST request with JSON data
$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Ali", "email": "ali@example.com"}' https://api.example.com/users
# Send JSON data from a file
$ curl -X POST -H "Content-Type: application/json" -d @data.json https://api.example.com/users
# PUT request (update)
$ curl -X PUT -H "Content-Type: application/json" -d '{"name": "Ali Veli"}' https://api.example.com/users/1
# DELETE request
$ curl -X DELETE https://api.example.com/users/1
# Add a custom header
$ curl -H "Authorization: Bearer TOKEN123" https://api.example.com/profile
# Add multiple headers
$ curl -H "Authorization: Bearer TOKEN123" -H "Accept: application/json" -H "X-Custom-Header: value" https://api.example.com/data
# Basic authentication
$ curl -u username:password https://api.example.com/protected
# Token-based authentication
$ curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR..." https://api.example.com/me
# Save with a specific file name
$ curl -o file.zip https://example.com/archive.zip
# Save with the remote file name
$ curl -O https://example.com/archive.zip
# Hide download progress
$ curl -s -O https://example.com/archive.zip
# Resume an interrupted download
$ curl -C - -O https://example.com/large-file.zip
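When an endpoint feels slow, curl's `--write-out` format variables show where the time goes. A sketch of a timing breakdown (all values are in seconds; the URL is a placeholder):

```shell
# Break down request timing with curl's -w (--write-out) variables:
# time_namelookup    - DNS resolution finished
# time_connect       - TCP connection established
# time_starttransfer - first response byte received (TTFB)
# time_total         - whole transfer done
curl -o /dev/null -s -w \
"dns:     %{time_namelookup}s
connect: %{time_connect}s
ttfb:    %{time_starttransfer}s
total:   %{time_total}s
" https://example.com/
```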
wget is a powerful tool specifically designed for file downloading and website mirroring. It automatically retries when connections drop and can run in the background.
# Download a file
$ wget https://example.com/file.tar.gz
# Save with a different name
$ wget -O custom-name.tar.gz https://example.com/file.tar.gz
# Download in the background
$ wget -b https://example.com/large-file.iso
# Progress is written to wget-log
# Quiet mode (no output)
$ wget -q https://example.com/file.tar.gz
# Resume an interrupted download
$ wget -c https://example.com/large-file.iso
# Limit download speed (100 KB/s)
$ wget --limit-rate=100k https://example.com/file.iso
# Download multiple files (from a list)
$ wget -i download-list.txt
# Recursive download
$ wget -r -l 2 https://example.com/docs/
# -r: recursive, -l 2: maximum 2 levels deep
# Mirror a website
$ wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/
# Download specific file types only
$ wget -r -A "*.pdf,*.doc" https://example.com/docs/
# Exclude specific file types
$ wget -r -R "*.jpg,*.png" https://example.com/
traceroute shows all the network nodes (hops) a packet passes through to reach its destination. It is used to identify where network problems occur.
# Basic traceroute
$ traceroute google.com
traceroute to google.com (142.250.185.206), 30 hops max, 60 byte packets
1 gateway (192.168.1.1) 1.234 ms 1.123 ms 1.045 ms
2 10.0.0.1 (10.0.0.1) 8.567 ms 8.234 ms 8.123 ms
3 * * *
4 142.250.185.206 (142.250.185.206) 12.345 ms 12.123 ms 12.045 ms
# Set maximum number of hops
$ traceroute -m 15 google.com
ss and netstat are used to view network connections, listening ports, and network statistics. On modern Linux systems, ss replaces the older netstat.
# Show all listening ports
$ ss -tlnp
# -t: TCP, -l: listening, -n: numeric, -p: process info
# Show all active connections
$ ss -tan
# Check a specific port
$ ss -tlnp | grep :80
# Listening ports with netstat (older systems)
$ netstat -tlnp
# Network statistics
$ ss -s
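In scripts, combining ss with grep gives a quick "is this port in use?" check. A minimal sketch (the `port_listening` name is an assumption; the pattern matches the local address:port column of ss's Linux output):

```shell
# port_listening: hypothetical helper - succeeds if a TCP socket
# is listening on the given port (matches ":PORT " in ss output).
port_listening() {
  ss -tln 2>/dev/null | grep -q ":$1 "
}

if port_listening 80; then
  echo "port 80 is in use"
else
  echo "port 80 is free"
fi
```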
nslookup and dig are DNS query tools. They are used to find the IP address of a domain name or the domain name of an IP address.
# DNS query with nslookup
$ nslookup google.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: google.com
Address: 142.250.185.206
# Detailed DNS query with dig
$ dig google.com
# Get only the IP address
$ dig +short google.com
# Query MX (mail) records
$ dig MX google.com
# Query NS (nameserver) records
$ dig NS google.com
# Reverse DNS query (IP → domain name)
$ dig -x 142.250.185.206
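dig can also direct a query at a specific resolver with the `@server` syntax, which helps spot DNS propagation differences. A sketch comparing answers from two public resolvers (the resolver IPs and domain are examples; `+time`/`+tries` just keep failures fast):

```shell
# Query the same name against two public resolvers and print each answer.
for ns in 8.8.8.8 1.1.1.1; do
  echo "== $ns =="
  dig +short +time=1 +tries=1 google.com @"$ns"
done
```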
These commands can be combined in shell scripts. The following script checks a list of API endpoints and reports their HTTP status codes:
#!/bin/bash
# api-check.sh - Check API endpoints

ENDPOINTS=(
  "https://api.example.com/health"
  "https://api.example.com/users"
  "https://api.example.com/products"
)

for url in "${ENDPOINTS[@]}"; do
  HTTP_CODE=$(curl -o /dev/null -s -w "%{http_code}" "$url")
  if [ "$HTTP_CODE" -eq 200 ]; then
    echo "[OK] $url - HTTP $HTTP_CODE"
  else
    echo "[ERROR] $url - HTTP $HTTP_CODE"
  fi
done
Linux network commands are an integral part of system administration and software development workflows. Testing connectivity with ping, sending API requests and transferring data with curl, downloading files and mirroring websites with wget, tracing network paths with traceroute, monitoring connections with ss/netstat, and performing DNS queries with dig/nslookup — learning these tools enables you to diagnose and resolve network issues quickly. By incorporating each of these commands into your daily workflow, you can continuously improve your network management skills.