Wget - Complete Guide for Termux (2025)
Table of Contents
- About Wget
- Basic Usage
- Essential Daily Commands
- Advanced Features
- Website Mirroring
- Configuration
- Troubleshooting
- Practical Examples
About Wget
GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
- Official Website: https://www.gnu.org/software/wget/
- Version: 1.21.4 or newer
- License: GPL-3.0+
- Maintainer: GNU Project
Key Features
- Non-interactive (runs in background)
- Recursive downloading (entire websites)
- Resume capability for interrupted downloads
- Bandwidth throttling
- Timestamping (download only newer files)
- Cookie support
- Proxy support
- SSL/TLS support
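A quick taste that combines several of these features (the URL is a placeholder):
# Resume an interrupted download in the background, throttled to 500 KB/s
wget -c -b --limit-rate=500K https://example.com/big.iso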
Wget vs Alternatives
| Feature | Wget | cURL | aria2 |
|---|---|---|---|
| Recursive download | ✅ | ❌ | ❌ |
| Mirror websites | ✅ | ❌ | ❌ |
| Multiple connections | ❌ | ❌ | ✅ |
| FTP recursion | ✅ | ❌ | ❌ |
| Non-interactive | ✅ | ✅ | ✅ |
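For flavor, here is the same resumable single-file download in each tool (flags shown are each tool's standard resume/split options; the URL is a placeholder):
# wget: resume with -c
wget -c https://example.com/file.zip
# curl: follow redirects (-L), keep the remote name (-O), resume (-C -)
curl -L -O -C - https://example.com/file.zip
# aria2: split the file across 4 connections
aria2c -x 4 https://example.com/file.zip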
Basic Usage
Simple Download
# Download a single file
wget https://example.com/file.zip
# Download with custom filename
wget -O myfile.zip https://example.com/file.zip
# Download to specific directory
wget -P /sdcard/Download https://example.com/file.zip
Check wget version
wget --version
Essential Daily Commands
1. Simple File Download
# Basic download
wget https://example.com/file.zip
# Download and save with different name
wget -O newname.zip https://example.com/file.zip
# Download to specific directory
wget -P /sdcard/Download https://example.com/file.zip
# -O: output filename
# -P: directory prefix
Example:
wget -P /sdcard/Download https://releases.ubuntu.com/22.04/ubuntu-22.04-desktop-amd64.iso
2. Resume Interrupted Downloads
# Continue partially downloaded file
wget -c https://example.com/largefile.iso
# -c or --continue: resume getting partially-downloaded file
Example:
# First attempt (interrupted)
wget https://example.com/largefile.zip
# Resume later (same directory)
wget -c https://example.com/largefile.zip
Important: Run the resume command from the directory containing the partial file (or pass the same -P directory you used for the first attempt).
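A minimal sketch, assuming the first attempt used -P (URL and path are placeholders):
# First attempt (interrupted)
wget -P /sdcard/Download https://example.com/largefile.zip
# Resume with the same -P so wget finds the partial file
wget -c -P /sdcard/Download https://example.com/largefile.zip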
3. Download Multiple Files
# From a text file (one URL per line)
wget -i urls.txt
# With custom directory
wget -i urls.txt -P /sdcard/Download
# -i: input file with URLs
Example urls.txt:
https://example.com/file1.zip
https://example.com/file2.mp4
https://example.com/file3.pdf
Command:
wget -i urls.txt -P /sdcard/Download
4. Limit Download Speed
# Limit to 1MB/s
wget --limit-rate=1M https://example.com/file.zip
# Limit to 500KB/s
wget --limit-rate=500K https://example.com/file.zip
# Limit to 100KB/s for mobile data
wget --limit-rate=100K https://example.com/file.zip
Example:
# Download in background with speed limit
wget --limit-rate=500K -b https://example.com/largefile.iso
5. Background Download
# Download in background
wget -b https://example.com/file.zip
# Background with log file
wget -b -o download.log https://example.com/file.zip
# Check background download status
tail -f wget-log
# or
tail -f download.log
# -b: go to background immediately after startup
# -o: log messages to FILE (overwrites; use -a to append)
Example:
wget -b -o download.log --limit-rate=1M -c https://example.com/largefile.iso
6. Download with Authentication
# HTTP Basic Authentication
wget --user=username --password=password https://example.com/file.zip
# Or using URL format
wget https://username:password@example.com/file.zip
# FTP Authentication
wget --ftp-user=username --ftp-password=password ftp://ftp.example.com/file.zip
# Read password from console (more secure)
wget --user=username --ask-password https://example.com/file.zip
Example:
wget --user=john --password=secret123 https://private.example.com/data.zip
7. Download with Retry Options
# Retry 10 times if download fails
wget --tries=10 https://example.com/file.zip
# Retry infinitely
wget --tries=inf https://example.com/file.zip
# Wait 30 seconds between retries
wget --waitretry=30 --tries=10 https://example.com/file.zip
# Timeout after 60 seconds
wget --timeout=60 https://example.com/file.zip
Example:
# Download from unreliable server
wget --tries=20 --waitretry=10 --timeout=60 -c https://unstable.example.com/file.zip
8. Download Entire Website (Mirror)
# Basic mirror
wget --mirror https://example.com
# Mirror with options
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com
# Short version
wget -mkEpnp https://example.com
# -m: mirror (recursive with infinite depth)
# -k: convert links for local viewing
# -E: adjust extension (.html)
# -p: download all page requisites (images, CSS, etc.)
# -np: no parent (don't ascend to parent directory)
Example:
wget -mkEpnp -P /sdcard/Websites https://example.com
9. Download Specific File Types
# Download only PDF files
wget -r -A pdf https://example.com
# Download only images
wget -r -A jpg,jpeg,png,gif https://example.com
# Exclude certain file types
wget -r -R mp4,avi,mkv https://example.com
# -r: recursive
# -A: accept list (comma-separated)
# -R: reject list (comma-separated)
Example:
# Download all PDF files from a website
wget -r -l 2 -A pdf -np https://example.com/documents/
10. Quiet and Verbose Modes
# Quiet mode (no output)
wget -q https://example.com/file.zip
# Verbose mode (detailed output)
wget -v https://example.com/file.zip
# Debug mode (extremely verbose)
wget -d https://example.com/file.zip
# Show progress bar only
wget --progress=bar https://example.com/file.zip
# Show progress as dot
wget --progress=dot https://example.com/file.zip
Example:
# Silent download for scripts (Termux has no /tmp; use $TMPDIR instead)
wget -q -O "$TMPDIR/data.json" https://api.example.com/data
11. User Agent Spoofing
# Set custom user agent
wget --user-agent="Mozilla/5.0" https://example.com/file.zip
# Mimic Chrome browser
wget --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" https://example.com
# Short version
wget -U "Mozilla/5.0" https://example.com/file.zip
Example:
# Download from site that blocks wget
wget -U "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36" https://example.com/file.zip
12. Download with Timestamp Check
# Download only if newer than local file
wget -N https://example.com/file.zip
# Timestamping (useful for syncing)
wget -N -r https://example.com/updates/
# -N: timestamping (don't re-retrieve unless newer)
Example:
# Daily sync of files
wget -N -r -np -l 1 https://example.com/daily-updates/
13. Download with Cookies
# Save cookies to file
wget --save-cookies cookies.txt --keep-session-cookies https://example.com
# Load cookies from file
wget --load-cookies cookies.txt https://example.com/restricted-page.html
# Both save and load
wget --load-cookies cookies.txt --save-cookies cookies.txt --keep-session-cookies https://example.com
Example:
# Login and download protected content
wget --save-cookies cookies.txt --post-data="user=john&pass=secret" https://example.com/login
wget --load-cookies cookies.txt https://example.com/protected/file.zip
14. Download via Proxy
# HTTP Proxy
wget -e use_proxy=yes -e http_proxy=proxy.example.com:8080 https://example.com/file.zip
# HTTPS Proxy
wget -e https_proxy=proxy.example.com:8080 https://example.com/file.zip
# SOCKS proxy: wget has no native SOCKS support, so route it through a
# proxifier (e.g. pkg install proxychains-ng) or use an HTTP proxy
proxychains4 wget https://example.com/file.zip
# No proxy for certain domains
wget --no-proxy https://localhost/file.zip
Example:
wget -e use_proxy=yes -e http_proxy=192.168.1.1:8080 https://example.com/file.zip
15. Check if File Exists (Spider Mode)
# Check without downloading
wget --spider https://example.com/file.zip
# Check and show only errors
wget --spider -q https://example.com/file.zip
echo $? # 0 if file exists, non-zero if error
# Check multiple URLs
wget --spider -i urls.txt
Example:
# Script to verify URLs
#!/data/data/com.termux/files/usr/bin/bash
while read url; do
if wget --spider -q "$url"; then
echo "✓ $url exists"
else
echo "✗ $url not found"
fi
done < urls.txt
Advanced Features
Recursive Download Options
# Download with depth limit
wget -r -l 3 https://example.com
# -l: maximum recursion depth (0 = infinite)
# Download only from specific domains (-D takes effect together with -H)
wget -r -H -D example.com,cdn.example.com https://example.com
# Exclude specific directories
wget -r -X /cgi-bin/,/temp/ https://example.com
# Follow only relative links
wget -r --relative https://example.com
# Follow FTP links from HTML
wget -r --follow-ftp https://example.com
Advanced Download Control
# Limit download size
wget --quota=100M -i urls.txt
# Note: Wget 1.x has no per-file size filter (--max-filesize is a curl
# option, not wget); --quota caps the total for recursive or multi-file
# downloads and never limits a single-file fetch
# Random wait between downloads (0.5 to 1.5 seconds)
wget -r --random-wait --wait=1 https://example.com
# Limit connection rate
wget --limit-rate=200K --wait=2 -r https://example.com
SSL/TLS Options
# Ignore SSL certificate errors (use cautiously)
wget --no-check-certificate https://example.com/file.zip
# Specify CA certificate
wget --ca-certificate=/path/to/cert.pem https://example.com/file.zip
# Use specific TLS version
wget --secure-protocol=TLSv1_3 https://example.com/file.zip
# Show SSL certificate info
wget --verbose --debug https://example.com 2>&1 | grep -i ssl
FTP-Specific Options
# Passive FTP (default)
wget ftp://ftp.example.com/file.zip
# Active FTP
wget --no-passive-ftp ftp://ftp.example.com/file.zip
# List FTP directory
wget --spider -r ftp://ftp.example.com/pub/
# Download entire FTP directory
wget -r -np ftp://username:password@ftp.example.com/directory/
# Preserve FTP directory structure
wget -r -nH --cut-dirs=2 ftp://ftp.example.com/pub/linux/
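To make -nH and --cut-dirs concrete, here is how they reshape the saved path (the URL is illustrative):
# URL: ftp://ftp.example.com/pub/linux/kernel.tar.gz
# default           -> ftp.example.com/pub/linux/kernel.tar.gz
# -nH               -> pub/linux/kernel.tar.gz
# -nH --cut-dirs=2  -> kernel.tar.gz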
Website Mirroring
Complete Website Download
# Full mirror with all options
wget --mirror \
--convert-links \
--adjust-extension \
--page-requisites \
--no-parent \
--wait=1 \
--random-wait \
--limit-rate=500K \
--user-agent="Mozilla/5.0" \
-P /sdcard/Websites \
https://example.com
# Short version
wget -mkEpnp -w 1 --random-wait --limit-rate=500K -U "Mozilla/5.0" -P /sdcard/Websites https://example.com
Explanation:
- -m/--mirror: turns on recursion and timestamping
- -k/--convert-links: convert links for offline viewing
- -E/--adjust-extension: save HTML files with .html extension
- -p/--page-requisites: download images, CSS, etc.
- -np/--no-parent: don't ascend to the parent directory
- -w 1: wait 1 second between requests
- --random-wait: random wait (0.5-1.5x the specified wait)
- --limit-rate: don't overload the server
Mirror Specific Sections
# Mirror only /docs/ section
wget -mkEpnp -np https://example.com/docs/
# Mirror with depth limit
wget -mkEpnp -l 3 -np https://example.com
# Mirror excluding certain paths
wget -mkEp -np -X /admin/,/private/ https://example.com
# Mirror only HTML and images
wget -r -A html,htm,css,png,jpg,jpeg,gif -np https://example.com
Update Existing Mirror
# Only download newer files
wget -N -r -np https://example.com
# Continue interrupted mirror
wget -c -r -np https://example.com
# Update mirror (timestamping + continue)
wget -Nc -r -np https://example.com
Configuration
Using Configuration File
Create config file: ~/.wgetrc
# Create config
nano ~/.wgetrc
Sample Configuration:
# === BASIC OPTIONS ===
# Default download directory
dir_prefix = /sdcard/Download
# Continue interrupted downloads
continue = on
# Retry settings
tries = 10
waitretry = 10
timeout = 60
# === SPEED ===
# Limit download rate (0 = unlimited)
limit_rate = 0
# Wait between downloads
wait = 1
random_wait = on
# === USER AGENT ===
user_agent = Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
# === RECURSIVE ===
# Default recursion depth
reclevel = 5
# Follow relative links only
relative_only = on
# === OUTPUT ===
# Verbose output
verbose = on
# Show progress bar
progress = bar
# === HTTP ===
# Keep session cookies
keep_session_cookies = on
# Ignore robots.txt (use responsibly)
robots = off
# === SSL ===
# Check certificates
check_certificate = on
# SSL protocol
secure_protocol = auto
# === FTP ===
# Use passive FTP
passive_ftp = on
Use custom config:
wget --config=/path/to/custom.wgetrc https://example.com/file.zip
Environment Variables
# Set HTTP proxy
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# Set FTP proxy
export ftp_proxy=http://proxy.example.com:8080
# No proxy for certain domains
export no_proxy=localhost,127.0.0.1,.local
# Use SOCKS proxy
export ALL_PROXY=socks5://127.0.0.1:1080
# Then use wget normally
wget https://example.com/file.zip
Practical Examples
Example 1: Download Linux ISO with Resume
wget -c \
--progress=bar:force \
--tries=0 \
--timeout=60 \
-O ubuntu.iso \
https://releases.ubuntu.com/22.04/ubuntu-22.04-desktop-amd64.iso
Explanation:
- -c: resume if interrupted
- --progress=bar:force: show the progress bar even in a non-TTY
- --tries=0: retry infinitely
- --timeout=60: time out after 60 seconds
- -O: save as ubuntu.iso
Example 2: Download All PDFs from a Website
wget -r \
-l 2 \
-A pdf \
-np \
--random-wait \
--limit-rate=500K \
-P /sdcard/Download/PDFs \
https://example.com/documents/
Explanation:
- -r -l 2: recursive with depth 2
- -A pdf: accept only PDF files
- -np: don't ascend to the parent directory
- --random-wait: be polite to the server
- --limit-rate=500K: limit speed
Example 3: Download a GitHub Repository Snapshot (Alternative to git)
# Download a branch snapshot as a zip (no git needed); -O takes the full
# output path, and -P is ignored when -O is given
wget -O /sdcard/repos/repo-main.zip \
     https://github.com/user/repo/archive/refs/heads/main.zip
# Or download release
wget https://github.com/user/repo/archive/refs/tags/v1.0.0.tar.gz
Example 4: Download Video with Resume and Logging
wget -c \
-b \
-o video-download.log \
--limit-rate=2M \
--tries=0 \
-P /sdcard/Videos \
https://example.com/video.mp4
# Check progress
tail -f video-download.log
Example 5: Download Multiple Files with Different Names
# Create download script
cat > download.sh << 'EOF'
#!/data/data/com.termux/files/usr/bin/bash
wget -O file1.zip https://example.com/abc123
wget -O file2.zip https://example.com/def456
wget -O file3.zip https://example.com/ghi789
EOF
chmod +x download.sh
./download.sh
Example 6: Schedule Download for Night Time
# Using the at command (if available)
echo "wget -c --limit-rate=5M -P /sdcard/Download https://example.com/largefile.iso" | at 02:00
# Or create a cron job (in Termux: pkg install cronie, then start crond)
crontab -e
# Add: 0 2 * * * wget -c -P /sdcard/Download https://example.com/largefile.iso
Example 7: Download with Login (Form Authentication)
# Step 1: Save cookies from login
wget --save-cookies cookies.txt \
--keep-session-cookies \
--post-data "username=john&password=secret&submit=Login" \
https://example.com/login
# Step 2: Download protected file
wget --load-cookies cookies.txt \
https://example.com/protected/file.zip
# Step 3: Cleanup
rm cookies.txt
Example 8: Archive Entire Website for Offline Viewing
wget --mirror \
--convert-links \
--adjust-extension \
--page-requisites \
--no-parent \
--wait=2 \
--random-wait \
--user-agent="Mozilla/5.0" \
--reject="*.exe,*.zip,*.tar.gz" \
-P /sdcard/Archive \
https://example.com
# Open in browser: file:///sdcard/Archive/example.com/index.html
Example 9: Download Files Matching Pattern
# Download all files matching pattern
wget -r -np -nd -A "*2024*.pdf" https://example.com/reports/
# -nd: no directories (flatten structure)
Example 10: Monitor Website for Changes
# Create monitoring script
cat > monitor.sh << 'EOF'
#!/data/data/com.termux/files/usr/bin/bash
URL="https://example.com/page.html"
FILE="page-$(date +%Y%m%d).html"
wget -q -O "$FILE" "$URL"
if [ -f "page-previous.html" ]; then
if ! diff -q "$FILE" "page-previous.html" > /dev/null; then
echo "Page changed!"
termux-notification --title "Website Changed" --content "$URL"
fi
fi
cp "$FILE" "page-previous.html"
EOF
chmod +x monitor.sh
# Run every hour
crontab -e
# Add: 0 * * * * /path/to/monitor.sh
Important Options Reference
| Option | Description | Example |
|---|---|---|
| -O FILE | Save as FILE | -O myfile.zip |
| -P DIR | Save to directory | -P /sdcard/Download |
| -c | Continue/resume download | -c |
| -b | Background download | -b |
| -q | Quiet mode | -q |
| -v | Verbose mode | -v |
| -i FILE | Input file with URLs | -i urls.txt |
| -r | Recursive download | -r |
| -l N | Recursion depth | -l 3 |
| -N | Timestamping | -N |
| -k | Convert links | -k |
| -p | Page requisites | -p |
| -np | No parent | -np |
| -A LIST | Accept file types | -A pdf,zip |
| -R LIST | Reject file types | -R exe,dmg |
| --limit-rate=RATE | Limit speed | --limit-rate=1M |
| --user=USER | HTTP username | --user=john |
| --password=PASS | HTTP password | --password=secret |
| --tries=N | Number of retries | --tries=10 |
| --timeout=SEC | Timeout in seconds | --timeout=60 |
| --wait=SEC | Wait between downloads | --wait=2 |
| --random-wait | Random wait | --random-wait |
| --user-agent=AGENT | Set user agent | --user-agent="Mozilla/5.0" |
| --spider | Check if file exists | --spider |
| --no-check-certificate | Ignore SSL errors | --no-check-certificate |
| --mirror | Mirror website | --mirror (= -r -N -l inf --no-remove-listing) |
Useful Command Combinations
Fast Download with All Safety Features
wget -c --tries=0 --timeout=60 --waitretry=10 -P /sdcard/Download URL
Mirror Website (Complete Package)
wget -mkEpnp -w 2 --random-wait --limit-rate=500K -U "Mozilla/5.0" URL
Background Download with Progress Tracking
wget -cb -o download.log --limit-rate=2M URL
tail -f download.log
Download Multiple Files Efficiently
wget -i urls.txt -c -P /sdcard/Download --wait=1 --random-wait
Download with Full Error Handling
wget -c --tries=20 --retry-connrefused --waitretry=10 --read-timeout=60 --dns-timeout=60 URL
Troubleshooting
Issue 1: “Cannot write to ‘file.zip’ (Permission denied)”
Solution: Check directory permissions
# In Termux, request storage permission
termux-setup-storage
# Use /sdcard/Download
wget -P /sdcard/Download URL
Issue 2: “ERROR 403: Forbidden”
Solution: Server blocks wget user agent
# Use browser user agent
wget -U "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" URL
Issue 3: “ERROR 404: Not Found”
Solution: Check URL with spider mode
# Verify URL exists
wget --spider URL
# Check for redirects
wget --spider --max-redirect=5 -S URL
Issue 4: “Unable to establish SSL connection”
Solution: Check certificate or skip verification
# Skip certificate check (use cautiously)
wget --no-check-certificate URL
# Or update CA certificates
pkg install ca-certificates
Issue 5: Download Speed Too Slow
Solution: Check if you’re being throttled
# Try different user agent
wget -U "Mozilla/5.0" URL
# Remove rate limiting from config
wget --limit-rate=0 URL
# Use multiple connections (not wget's forte, use aria2)
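For comparison, a minimal aria2 sketch (install with pkg install aria2; -x sets connections per server, -s the split count):
# Download one file over 8 parallel connections (aria2, not wget)
aria2c -x 8 -s 8 https://example.com/file.zip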
Issue 6: “Connection timed out”
Solution: Increase timeout and retries
wget --timeout=120 --tries=20 --waitretry=30 -c URL
Issue 7: Downloaded File is 0 bytes
Solution: Check if authentication is required
# Check with spider mode
wget --spider -S URL
# May need login
wget --save-cookies cookies.txt --post-data="user=X&pass=Y" LOGIN_URL
wget --load-cookies cookies.txt URL
Issue 8: Recursive Download Gets Too Much
Solution: Limit depth and domains
# Limit to depth 2
wget -r -l 2 -np URL
# wget -r already stays on the starting host; to span only to listed
# domains, combine -H with -D
wget -r -H -D example.com URL
# Exclude certain directories
wget -r -X /ads/,/temp/ URL
Best Practices
1. Always Use Resume Flag for Large Files
wget -c LARGE_FILE_URL
2. Be Polite to Servers
# Add delays and random waits
wget -r --wait=2 --random-wait --limit-rate=500K URL
3. Use Background Mode for Long Downloads
wget -bc -o download.log LARGE_FILE_URL
4. Save Bandwidth with Timestamping
# Only download if newer
wget -N URL
5. Use Config File for Consistent Settings
# Edit ~/.wgetrc once, benefit always
nano ~/.wgetrc
6. Verify Downloads
# Download checksum file
wget https://example.com/file.zip
wget https://example.com/file.zip.sha256
# Verify
sha256sum -c file.zip.sha256
7. Log Everything for Debugging
wget -o download.log -d URL # -d for debug mode
8. Use Appropriate User Agents
# Don't pretend to be a browser if you're not
# But use modern UA if site blocks old clients
wget -U "Mozilla/5.0" URL
Useful Scripts
Script 1: Batch Downloader with Progress
#!/data/data/com.termux/files/usr/bin/bash
# save as: batch-download.sh
URLS_FILE="$1"
DOWNLOAD_DIR="${2:-/sdcard/Download}"
if [ ! -f "$URLS_FILE" ]; then
echo "Usage: $0 <urls-file> [download-dir]"
exit 1
fi
mkdir -p "$DOWNLOAD_DIR"
TOTAL=$(wc -l < "$URLS_FILE")
COUNT=0
while IFS= read -r url; do
COUNT=$((COUNT + 1))
echo "[$COUNT/$TOTAL] Downloading: $url"
wget -c -P "$DOWNLOAD_DIR" \
--progress=bar:force \
--tries=5 \
"$url"
if [ $? -eq 0 ]; then
echo "β Success"
else
echo "β Failed"
echo "$url" >> failed-downloads.txt
fi
sleep 2
done < "$URLS_FILE"
echo "Download complete!"
if [ -f failed-downloads.txt ]; then
echo "Some downloads failed. Check failed-downloads.txt"
fi
Usage:
chmod +x batch-download.sh
./batch-download.sh urls.txt /sdcard/Download
Script 2: Smart Resume Script
#!/data/data/com.termux/files/usr/bin/bash
# save as: smart-resume.sh
# Wget does not tag partial downloads with a special extension, so keep
# the original URLs in a list (resume-urls.txt) and re-run them with -c
while IFS= read -r url; do
echo "Resuming: $url"
wget -c "$url"
done < resume-urls.txt
Script 3: Website Change Monitor
#!/data/data/com.termux/files/usr/bin/bash
# save as: monitor-site.sh
URL="$1"
HASH_FILE=".site-hash"
if [ -z "$URL" ]; then
echo "Usage: $0 <url>"
exit 1
fi
# Download page
wget -q -O page.html "$URL"
# Calculate hash
NEW_HASH=$(md5sum page.html | awk '{print $1}')
# Compare with previous
if [ -f "$HASH_FILE" ]; then
OLD_HASH=$(cat "$HASH_FILE")
if [ "$NEW_HASH" != "$OLD_HASH" ]; then
echo "Page changed!"
termux-notification \
--title "Website Updated" \
--content "$URL has changed"
else
echo "No changes"
fi
fi
# Save new hash
echo "$NEW_HASH" > "$HASH_FILE"
rm page.html
Usage with cron:
chmod +x monitor-site.sh
# Run every hour
crontab -e
# Add: 0 * * * * /path/to/monitor-site.sh https://example.com/page.html
Script 4: Download Queue Manager
#!/data/data/com.termux/files/usr/bin/bash
# save as: queue-manager.sh
QUEUE_FILE="$HOME/.wget-queue"
DOWNLOAD_DIR="/sdcard/Download"
case "$1" in
add)
echo "$2" >> "$QUEUE_FILE"
echo "Added to queue: $2"
;;
list)
if [ -f "$QUEUE_FILE" ]; then
cat -n "$QUEUE_FILE"
else
echo "Queue is empty"
fi
;;
start)
if [ ! -f "$QUEUE_FILE" ]; then
echo "Queue is empty"
exit 1
fi
while IFS= read -r url; do
echo "Downloading: $url"
wget -c -P "$DOWNLOAD_DIR" "$url"
sleep 2
done < "$QUEUE_FILE"
rm "$QUEUE_FILE"
echo "Queue completed!"
;;
clear)
rm -f "$QUEUE_FILE"
echo "Queue cleared"
;;
*)
echo "Usage: $0 {add|list|start|clear} [url]"
echo " add URL - Add URL to queue"
echo " list - Show queue"
echo " start - Start downloading queue"
echo " clear - Clear queue"
;;
esac
Usage:
chmod +x queue-manager.sh
# Add URLs
./queue-manager.sh add https://example.com/file1.zip
./queue-manager.sh add https://example.com/file2.zip
# List queue
./queue-manager.sh list
# Start downloading
./queue-manager.sh start
Integration with Other Tools
With yt-dlp (YouTube Downloads)
# Install yt-dlp
pkg install yt-dlp
# Use wget as external downloader
yt-dlp --external-downloader wget \
--external-downloader-args "--limit-rate=1M -c" \
"YOUTUBE_URL"
With Termux:API (Notifications)
# Install termux-api
pkg install termux-api
# Notify when the download finishes (keep wget in the foreground; with -b
# the shell backgrounds wget and the notification would fire immediately)
wget -c -o download.log URL && \
termux-notification \
--title "Download Complete" \
--content "$(basename URL) finished" \
--sound
With grep/awk (Extract URLs)
# Extract all download links from a page (extracted links must be
# absolute URLs; prefix relative ones with the site origin first)
wget -qO- https://example.com/downloads | \
grep -oP 'href="\K[^"]*\.zip' | \
wget -i -
# Or with awk
wget -qO- https://example.com | \
awk -F'"' '/\.pdf/{print $2}' | \
wget -i - -P /sdcard/PDFs
Resume All Partial Downloads
# Wget resumes by URL, not by local file name, and leaves no .tmp/.part
# markers itself; keep the original URLs and re-run them with -c
wget -c -i resume-urls.txt -P /sdcard/Download
Performance Optimization
For Slow Connections
# Lower timeout, more retries
wget --timeout=30 --tries=50 --retry-connrefused -c URL
For Fast Connections
# Reduce wait times
wget -r --wait=0.5 --limit-rate=0 URL
For Unreliable Connections
# Maximum patience
wget -c \
--tries=0 \
--retry-connrefused \
--waitretry=30 \
--read-timeout=60 \
--dns-timeout=60 \
URL
For Limited Storage
# Wget has no per-file size limit; cap the total with --quota
wget -r --quota=100M URL
wget --quota=500M -i urls.txt
Comparison: When to Use What
Use wget when:
- ✅ Downloading entire websites
- ✅ Need recursive downloads
- ✅ Mirroring websites
- ✅ FTP recursive downloads
- ✅ Non-interactive/background downloads
- ✅ Timestamping (only download newer files)
- ✅ Following links in HTML
Use aria2 when:
- ✅ Need multiple connections per file
- ✅ Maximum download speed is priority
- ✅ Downloading torrents
- ✅ Need RPC interface
- ✅ Resuming complex multi-part downloads
Use curl when:
- ✅ Testing APIs
- ✅ Need to send data (POST, PUT, etc.)
- ✅ Working with REST APIs
- ✅ Need detailed protocol control
- ✅ Testing HTTP headers
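As a sketch of the API-testing case above (endpoint and payload are placeholders):
# POST JSON to an API with curl
curl -X POST -H "Content-Type: application/json" \
     -d '{"key":"value"}' https://api.example.com/endpoint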
Advanced Use Cases
Use Case 1: Create Offline Documentation
# Download entire documentation site
wget --mirror \
--convert-links \
--adjust-extension \
--page-requisites \
--no-parent \
--wait=1 \
--limit-rate=500K \
-P /sdcard/Docs \
https://docs.example.com/
# Zip for sharing
cd /sdcard/Docs
zip -r docs-offline.zip docs.example.com/
Use Case 2: Download Academic Papers
# Download all PDFs from a research portal
wget -r -l 2 \
-A pdf \
-np \
--random-wait \
--wait=3 \
-P /sdcard/Papers \
https://university.edu/research/
# Organize by month
cd /sdcard/Papers
mkdir -p "$(date +%Y-%m)"
find . -name "*.pdf" -exec mv {} "$(date +%Y-%m)/" \;
Use Case 3: Backup Website Content
# Daily backup script
#!/data/data/com.termux/files/usr/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR="/sdcard/Backups/website-$DATE"
wget --mirror \
--convert-links \
--page-requisites \
--no-parent \
-P "$BACKUP_DIR" \
https://mywebsite.com
# Compress backup
tar -czf "website-backup-$DATE.tar.gz" "$BACKUP_DIR"
# Delete backups older than 7 days
find /sdcard/Backups -name "website-backup-*.tar.gz" -mtime +7 -delete
Use Case 4: Download Software Updates
# Check and download only if newer
wget -N \
--no-if-modified-since \
-P /sdcard/Updates \
https://example.com/software/latest.apk
# Verify checksum
cd /sdcard/Updates
wget -N https://example.com/software/latest.apk.sha256
sha256sum -c latest.apk.sha256
Use Case 5: Media Collection Downloader
# Download image gallery
wget -r -l 1 \
-H \
-A jpg,jpeg,png,gif \
-nd \
-P /sdcard/Pictures/Gallery \
https://example.com/gallery/
# Remove duplicates
cd /sdcard/Pictures/Gallery
fdupes -rdN .
Quick Reference Card
# BASIC
wget URL # Simple download
wget -O name.zip URL # Save with custom name
wget -P /path URL # Save to directory
wget -c URL # Resume download
# MULTIPLE FILES
wget -i urls.txt # Download from list
wget URL1 URL2 URL3 # Multiple URLs
# RECURSIVE
wget -r URL # Recursive download
wget -r -l 2 URL # Depth 2
wget -r -A pdf URL # Only PDFs
wget -r -R exe URL # Exclude executables
# MIRROR
wget -m URL # Mirror (= -r -N -l inf --no-remove-listing)
wget -mkEpnp URL # Complete mirror package
# SPEED CONTROL
wget --limit-rate=1M URL # Limit to 1MB/s
wget --wait=2 -r URL # Wait 2 seconds between files
# BACKGROUND
wget -b URL # Background download
wget -bc URL # Background + continue
# AUTHENTICATION
wget --user=X --password=Y URL # HTTP auth
wget --load-cookies cookies.txt # Use cookies
# ADVANCED
wget --spider URL # Check if exists
wget -N URL # Timestamping
wget -U "Mozilla/5.0" URL # Custom user agent
wget --tries=10 URL # Retry 10 times
Sources & Citations
This guide was compiled from:
- Official GNU Wget Manual: https://www.gnu.org/software/wget/manual/wget.html (complete reference for all options and features)
- Wget man page: https://linux.die.net/man/1/wget (command-line option reference)
- GNU Project documentation: https://www.gnu.org/software/wget/ (official project page and downloads)
- Wget2 development: https://gitlab.com/gnuwget/wget2 (next-generation wget features)
- Termux Wiki: https://wiki.termux.com/ (Termux-specific usage and examples)
- Community best practices: guides and tutorials from 2024-2025 covering real-world use cases and optimization techniques
Last Updated: 2025
Wget Version: 1.21.4+
Tested on: Termux (Android)
Status: Production Ready
This comprehensive guide covers everything you need to master wget in Termux as of 2025. Save this markdown file for quick reference!