#1
Major Leaguer
Join Date: Jan 2002
Posts: 311
Anyone who has read this post:
http://forums.macosxhints.com/showth...3955#post33955 knows that I've been learning how to use ImageMagick to post-process raw TIFF files after scanning. (I just enter one line of commands, one after another, separated by semicolons so that they run sequentially.) It works great, but I like to run it at night and batch-process a whole bunch of scans while I sleep.

The problem is that when I wake up, I get errors about Mac OS X not having enough memory and needing disk space cleared out! The fact of the matter is that I have over 2 GB free on my drive, and 640 MB of RAM. But look at what my vm_stat figures are in the morning (I didn't check them at night):

That is a lot of pageouts! I was getting hardly any before. ImageMagick gives me this kind of error:

"convert.pbm" is an intermediate file I create before creating a PDF file. These can become as large as 10 MB; my final PDF files are usually between 1 and 4 MB. So the question is: is there any way to get ImageMagick to behave better towards my memory? Or to clear out the VM cache between commands so that there is space for the next one? Or maybe I am wrong about where the problem is? On the ImageMagick list a Unix person suggested that the size of "TMPDIR" might be the problem. Would this be relevant to OS X?

Last edited by kerim; 11-12-2002 at 09:59 AM.
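A minimal sketch of how a semicolon-chained batch behaves (the step functions below are hypothetical stand-ins for the real convert commands): semicolons keep running the remaining commands after a failure, while `&&` stops the chain at the first error, which can matter for an unattended overnight run.

```shell
#!/bin/sh
# Hypothetical stand-ins for the chained convert commands; step2 fails.
step1() { echo "step1 ok"; }
step2() { false; }
step3() { echo "step3 ran anyway"; }

# Semicolons run every command in turn, even after a failure:
step1 ; step2 ; step3

# '&&' stops at the first failing command instead:
step1 && step2 && step3
echo "chain stopped with status $?"
```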
#2
Major Leaguer
Join Date: Jan 2002
Posts: 311
One possible solution
I realized that, in order to save time, I had been running "convert" commands in two separate Terminal windows at the same time. Perhaps this overloaded the memory? Maybe if I stick to just running the commands sequentially it will help...
#3
League Commissioner
Join Date: Jan 2002
Posts: 5,536
yeah, i think you'll want to calm your script down a bit. two heavy processes going at it at once is often not better than running them serially.

i suspect that these processes exhaust real memory (with many inactive pages), which causes the memory manager to become paranoid and thrash and page and swap, creating new page files until the disk is full; then on process death the swapfiles are released and the mem mgr eventually cleans them up before you wake. or gremlins do it.

what you might want to do is interleave some sustained disk I/O in between your convert commands. it has been seen that a du / or a find / will peel the inactive page pool down to a scant minimum. so try (and this is sh, not csh):

du -sx / >/dev/null 2>&1
convert ...
du -sx / >/dev/null 2>&1
convert ...
etc...

and see if that lets each convert perform without wild oscillations in memory mgt
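As a script, that interleaving might look like the sketch below (assuming ImageMagick's convert is installed and the scans are .tif files in the current directory; the output names are hypothetical):

```shell
#!/bin/sh
# Sustained disk I/O between conversions, per the suggestion above:
# walking a tree with du forces the VM to expire inactive pages.
flush_inactive() {
    du -sx "${1:-/}" >/dev/null 2>&1    # discard du's output and errors
}

for f in *.tif; do
    [ -e "$f" ] || continue             # no .tif files: nothing to do
    flush_inactive /                    # shrink the inactive-page pool
    convert "$f" "pbm:${f%.tif}.pbm"    # one conversion at a time
done
```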
__________________
On a clear disk, you can seek forever.
#4
League Commissioner
Join Date: Jan 2002
Posts: 5,536
Code:
nice +10 convert -gravity South -crop 1700x2200+0+0 -rotate "+90" \
    -level 10000,1,50000 -unsharp 6x1+100+0.05 \
    -adjoin *.tif pbm:convert.pbm ; \
convert -compress zip -page 792x612 convert.pbm pdf:document.pdf
#5
Major Leaguer
Join Date: Jan 2002
Posts: 311
Thanks.
Do you think I should use "nice" at all? If I'm running at night I don't really need to use the computer for anything else, and perhaps not using "nice" will use less VM? I'm beginning to realize that Aqua apps are about more than a pretty GUI! I've never had errors like this with Graphic Converter. (But unfortunately it produces messed-up multi-page TIFF files ...)
#6
Major Leaguer
Join Date: Jan 2002
Posts: 311
What do you mean by "sh" and "csh"?
#7
League Commissioner
Join Date: Jan 2002
Posts: 5,536
you never did two at a time with GC, either. and there may have been contributing problems. let's not go making sweeping generalizations

nice isn't going to matter unless some other process needs cycles.

there are two families of shells: sh (sh, bash, zsh, ksh) and csh (csh, tcsh, ...). they have different syntax for things like redirecting file descriptors (stdin, stdout, stderr) [ 2>&1 = sh, >& = csh ].

in sh, command >/dev/null 2>&1 redirects the messages that would go to stderr (2, the terminal) and stdout (1, the terminal) to /dev/null, making them vanish. we don't want to see the output or errors of du -sx, we just want its effect on inactive memory pages.

anyhow, if you put your commands into a script, make it a sh script and not a csh script.
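A quick sh demonstration of the 2>&1 form described above (noisy is a hypothetical command that writes to both streams; the tcsh equivalent appears as a comment since this block runs under sh):

```shell
#!/bin/sh
# A stand-in command that writes one line to each stream.
noisy() { echo "to stdout"; echo "to stderr" >&2; }

noisy >/dev/null 2>&1            # sh: discard both streams
merged=$(noisy 2>&1)             # sh: fold stderr into the captured stdout
echo "$merged"

# tcsh equivalent of the first line:  noisy >& /dev/null
```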
#8
Major Leaguer
Join Date: Jan 2002
Posts: 311
How would I do the same thing on the tcsh command line? I tried "du -sx / >/dev/null 2>&1" but it gives me an error message: "Ambiguous output redirect."

I tried "du -sx / >/dev/null >&" but that didn't work either; it said I was missing an output name. I tried adding "1" but it said it was ambiguous ... TiA!

Last edited by kerim; 11-12-2002 at 12:19 PM.
#9
League Commissioner
Join Date: Jan 2002
Posts: 5,536
c'mon! this is a hint site, not the magic answer wombat!

dig out yer tcsh manual and search for >&

oh, alright, alright. here you go...

command >& file
#10
Major Leaguer
Join Date: Jan 2002
Posts: 311
Thanks. If I type "man tcsh" I get pages and pages of text. Is there some way to quickly find instructions on use of a particular variable?
#11
League Commissioner
Join Date: Jan 2002
Posts: 5,536
type 'h' inside the pager to learn how to use it; typing '/' followed by a pattern searches forward.
gets tedious, doesn't it? wanna hire me? rent's due.
#12
Major Leaguer
Join Date: Jan 2002
Posts: 311
Still not working:

I enter: sudo du -sx / > /dev/null & test

And it returns: [4] 863

Then it asks for my "password" at the prompt, in a weird way (what I type is visible and it doesn't do anything). ???
#13
Major Leaguer
Join Date: Jan 2002
Location: Adelaide, South Australia
Posts: 470
tcsh is a panus in the ain for redirection; as merv suggested, rewriting it as a simple sh script would endow your script with the ability to redirect stdout and stderr easily.

That said, here's some nonsense that I scrawled for myself a while back about how to redirect things using tcsh. merv's suggestion of "command >& filename" redirects *both* stdout and stderr to filename, but won't help if you only want stdout. Anyway, the following works for most purposes, but it really is ugly central. Sorry 'bout the messy notes, but they were for me (and I don't have the time or inclination for a rewrite just now!)

Cheers,
Paul

Redirection is bloody painful in tcsh: >& redirects both, and if you try something like:

grep prompt * | grep -v directory

to get rid of the annoying "blah is a directory" messages, it doesn't work. (I know you can simply add a "-d skip" to the grep call to suppress directories, but let's pretend that doesn't exist.) The problem is that those error messages are in the stderr stream, so grep doesn't whack them as you might hope.

Probably the easiest way around this is to pipe both streams of the initial process to the stdin of grep, so that it doesn't differentiate stdout and stderr. Recall that > redirects stdout, >& also redirects stderr, and < redirects stdin. But you can't easily unhook what's in stderr from what's in stdout.

( grep prompt * ) |& grep -v directory

Hooray! but far from elegant. Similarly for find, with its permission-denied errors.

Suppose that file1 and file2 exist, but not file3. You try to get a listing, but the error message is displayed too:

% ls -l file{1,2,3}
ls: file3: No such file or directory
-rw-r--r--  1 pmccann  staff  0 Jan 29 15:58 file1
-rw-r--r--  1 pmccann  staff  0 Jan 29 15:58 file2

% (ls -l file{1,2,3} > /tmp/pmccann ) >& /dev/null
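For contrast, the same stream surgery is a one-liner apiece in sh (again with a hypothetical noisy command standing in for ls/find; temp files come from mktemp):

```shell
#!/bin/sh
# A stand-in for ls/find: one wanted line on stdout, one error on stderr.
noisy() { echo "file1"; echo "no such file" >&2; }

keep=$(mktemp)                     # temp file for the wanted output
noisy > "$keep" 2>/dev/null        # sh: keep stdout, drop stderr
cat "$keep"

errs=$(mktemp)                     # temp file for the errors
noisy 2> "$errs" | grep -c file1   # sh: filter stdout, stash stderr aside
cat "$errs"
rm -f "$keep" "$errs"
```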
#14
Major Leaguer
Join Date: Jan 2002
Posts: 311
Update
For one thing, running two commands at the same time is not the problem. It stopped last night for the same reason even though I was only running one task at a time!

I think ImageMagick just chokes on folders with lots of files. I may need to restructure my command to do the conversions first, before the -adjoin step; that might take up less space. I think "-unsharp" is probably the option that takes the most memory, and not having it write each file to disk before it handles 60 other images is probably too much. I should create an intermediate file for each scan and then merge them afterwards. (I'll try this again.)

But there is also the possibility of clearing out my cache files using the "sudo periodic weekly" command. However, here I run into the issue that it prompts me for a password, and I need it to run automatically while I am asleep! Also, this command takes a *long* time to run ... Any ideas?
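One possible way around the password prompt (an untested sketch; the script path is hypothetical): jobs in root's crontab run as root without any sudo prompt, so the nightly batch or cleanup could be scheduled in /etc/crontab, e.g.:

```shell
# /etc/crontab entry (minute hour day month weekday user command):
# run a hypothetical batch script as root at 3 a.m. every night
0 3 * * *  root  /usr/local/bin/nightly-scans.sh
```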
#15
League Commissioner
Join Date: Jan 2002
Posts: 5,536
the suggestion to run the weekly is merely born out of the effect of the commands contained within: they perform sustained disk I/O, which expires some inactive memory pages and releases them to the free memory pool.

that command is

find /    # <- everywhere

and my suggestion was to use

du -sx /

as it can be lighter/faster. the redirection of stdout and stderr to /dev/null was because du is going to bang into files you don't have permission to stat, and will spit up error messages. that is, we don't want to see anything from du, just its effects on memory.

so, for tcsh, try:

Code:
% vm_stat | egrep free\|active\|wired
Pages free:                   210490.
Pages active:                  22334.
Pages inactive:               130497.
Pages wired down:              29895.
% du -sx / >& /dev/null
% vm_stat | egrep free\|active\|wired
Pages free:                   218266.
Pages active:                  22140.
Pages inactive:               118223.
Pages wired down:              34587.
#16
Major Leaguer
Join Date: Jan 2002
Posts: 311
How do you calculate 32MB from those stats?
#17
League Commissioner
Join Date: Jan 2002
Posts: 5,536
well, if you look at vm_stat output, it tells you the units are 4096-byte pages...

Code:
$ vm_stat
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free:                   230023.
...

the net free-page gain of the above du: roughly 8 (thousand pages) * 4 (KB per page) = 32 MB
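The same arithmetic spelled out in shell, using the free-page counts from the earlier du example. Exact integer arithmetic gives 30 MB; the 8 * 4 = 32 above comes from rounding the page counts to thousands first.

```shell
#!/bin/sh
before=210490                      # Pages free before the du
after=218266                       # Pages free after the du
pages=$((after - before))          # pages reclaimed
mb=$((pages * 4096 / 1048576))     # 4096-byte pages -> whole megabytes
echo "$pages pages is about $mb MB freed"
```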
#18
Major Leaguer
Join Date: Jan 2002
Posts: 311
Well, this command worked for me!

But it didn't go much faster than doing the "sudo periodic weekly". Still, the results seem to have done something, even though I had just run "sudo periodic weekly", so maybe it works better? Here are the stats:

kerim% vm_stat | egrep free\|active\|wired
Pages free:                    82674.
Pages active:                  36245.
Pages inactive:                30037.
Pages wired down:              14883.
kerim% du -sx / >& /dev/null
kerim% vm_stat | egrep free\|active\|wired
Pages free:                    89681.
Pages active:                  35741.
Pages inactive:                23540.
Pages wired down:              14878.

How many MB would you make that?
#19
Major Leaguer
Join Date: Jan 2002
Posts: 311
So 7 (thousand pages) x 4 (KB) = 28 MB freed for me, even after running the weekly script! Not bad!
#20
League Commissioner
Join Date: Jan 2002
Posts: 5,536
free before: 82k
free after: 89k
net: 7k pages * 4 KB = 28 MB

--

if you have one large partition, you can speed up the du by confining it to a smaller hierarchy...

du -s /System

and if that's not enough to strip the inactive pages, add /usr:

du -s /System /usr