Title: Disk Full on /tmp in Linux
Category: Troubleshooting
Applies To: Linux 9.4
Last Updated: 23/06/2025
Issue Summary:
The /tmp directory has consumed all available disk space, causing application failures, service crashes, or system instability. This can impact Hadoop daemons (e.g., NameNode, DataNode), YARN jobs, or general system operations.
Typical errors:
"No space left on device" errors
Hadoop jobs or applications fail with temp file creation errors
Services fail to start or crash due to lack of temp space
Possible Causes:
Large temporary files not cleaned: applications or scripts create large files under /tmp and don’t remove them.
Zombie or orphaned Hadoop job files: Hadoop or Spark may leave temporary directories in /tmp after a job failure (see the check below).
Users writing heavy data to /tmp: users running scripts or data pipelines write gigabytes to /tmp.
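For example, a quick read-only check shows whether leftover Hadoop or Spark job directories are to blame (the /tmp/hadoop-* and /tmp/spark-* patterns are the common defaults; your deployment may use different paths):
# Read-only check: size of any leftover Hadoop/Spark scratch directories, largest first
du -sh /tmp/hadoop-* /tmp/spark-* 2>/dev/null | sort -rh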
Step-by-Step Resolution:
Step 1: Check Disk Usage
df -h /tmp
Example output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        40G   39G  0.5G  99% /
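Note that "No space left on device" can also mean the filesystem has run out of inodes rather than bytes, and it helps to confirm whether /tmp is its own mount point or part of the root filesystem:
# Check whether /tmp is a separate mount or lives on the root filesystem
findmnt --target /tmp
# Check inode usage; 100% here produces the same error even with free space
df -i /tmp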
Step 2: Identify Large Files
du -sh /tmp/*
or, to find the top space hogs:
find /tmp -type f -exec du -sh {} + | sort -rh | head -20
Also check hidden files:
du -sh /tmp/.[!.]*
(the .[!.]* pattern avoids matching the . and .. entries)
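If it is unclear which user or pipeline is filling /tmp, a rough per-owner summary can help (a sketch assuming GNU find and awk; adjust to your environment):
# Sum file sizes under /tmp by owning user (in bytes), largest first
find /tmp -xdev -type f -printf '%u %s\n' 2>/dev/null \
  | awk '{sum[$1] += $2} END {for (u in sum) print u, sum[u]}' \
  | sort -k2 -rn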
Step 3: Safely Remove Unused Files
Remove old or unnecessary files (only if sure):
rm -r /tmp/hsperfdata_*
rm -r /tmp/tmp.*
rm -r /tmp/hadoop-*
rm -r /tmp/spark-*
You can also clean up files older than a given number of days (here, two):
find /tmp -type f -mtime +2 -exec rm -f {} \;
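Before running the command above, it is safer to preview what would be removed (adjust the +2 age threshold to your retention policy):
# Dry run: list files older than 2 days without deleting anything
find /tmp -type f -mtime +2 -print
# Approximate total size of those files
find /tmp -type f -mtime +2 -exec du -ch {} + | tail -1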
Step 4: Stop Services (If Needed) Before Cleanup
To safely delete temp files that belong to running applications, stop the services first:
stop-dfs.sh
stop-yarn.sh
rm -r /tmp/hadoop-*
start-dfs.sh
start-yarn.sh
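If this cleanup is performed regularly, the sequence can be wrapped in a small script. A minimal sketch, assuming the Hadoop sbin scripts above are on the PATH and that /tmp/hadoop-* and /tmp/spark-* hold only disposable scratch data:
#!/usr/bin/env bash
# Sketch: stop Hadoop services, clear their /tmp scratch directories, restart, verify.
set -euo pipefail

stop-dfs.sh
stop-yarn.sh

rm -rf /tmp/hadoop-* /tmp/spark-*

start-dfs.sh
start-yarn.sh

# Confirm the space was actually reclaimed
df -h /tmp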
Step 5: Monitor Live Usage (Optional)
While cleaning:
watch -n 1 df -h /tmp
Additional Notes:
For mission-critical clusters, use a separate volume or partition for /tmp.
Use lsof | grep /tmp to check which processes are holding large temp files open.
Always verify files are not in use before deleting.
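If df still reports /tmp as full after deleting files, a process may be holding a deleted file open; the space is released only when that process closes the file or exits. One way to spot this (lsof output format varies slightly between versions):
# List open-but-deleted files; the grep assumes the original path was under /tmp
lsof +L1 2>/dev/null | grep /tmp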