Pegging the Processor
Hi guys,
What I'm looking for are suggestions for short shell/perl/python/ruby scripts, or bash or zsh one-liners, that will peg the processor. Any ideas?
-v
# maxcpuuser, maxcpusystem: to max out the user & system CPU usage
alias maxcpuuser='yes > /dev/null'
alias maxcpusystem='cat /dev/zero > /dev/null'
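A single `yes` only pegs one core on a multi-core machine. A hedged sketch that spawns one worker per core (the `hw.ncpu` sysctl is the Mac OS X spelling; the `getconf` fallback is an assumption for other Unixes, not something from this thread):

```shell
# Spawn one CPU-burning 'yes' per core; hw.ncpu is Mac OS X's sysctl name,
# getconf _NPROCESSORS_ONLN is an assumed fallback for Linux.
ncpu=$(sysctl -n hw.ncpu 2>/dev/null || getconf _NPROCESSORS_ONLN)
for i in $(seq 1 "$ncpu"); do
    yes > /dev/null &
done
echo "started $ncpu workers"
# clean up afterwards with: killall yes
```

Quitting Terminal (or `killall yes`) stops the workers, so clean-up stays a one-liner.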
Thanks Hayne.
I do have one other thing I'd like to consult the board about. I've been trying to max out the RAM usage as well, but haven't had much luck. I tried the following:
Code:
% hdid -nomount ram://`expr 2 "*" 1024 "*" 1024`
My RAM usage (viewed in Activity Monitor) did not seem to change much at all. I was looking at Free, VM Size, Active, Inactive, Wired, etc., and they all stayed about the same. I have about 1.25 GiB of RAM. Am I fighting an uphill battle? I figure that perhaps the VM code is smarter and pages have to be reused somehow, or that I have to max out the 4 GB per app, or somehow exceed the total VM size. I know I could use something like memtest, but if at all avoidable I'd love to be able to type in a one-liner to stress-test a machine.
-v
memtestosx.org now runs as well while OS X is booted.
It's still a better idea to use it in single-user mode, but knowing that might be helpful for what you're trying to do.
There are lots of ways that you could use up memory via the command-line.
For example:
Code:
perl -e '$str = "a" x 1000000; sleep 1000'
But this isn't very controlled - it's hard to know how much memory you will actually use up with the above command. You could use my "VMTester" utility (http://hayne.net/MacDev/VMTester/) to stress-test the RAM usage.
Thanks guys, I appreciate all the info.
-v
yes > /dev/null
Quote:
:)
It does look a lot like that, minus the elegance of the aliasing...
But again, reading the whole thread before posting an answer seems to be no longer "en vogue", even when the number of posts is in the single digits.
I've been using all the suggestions here, thanks a lot everyone.
I wanted to finally ask about using up the HD. I know I can pipe /dev/random to a file on the HD to make a big file, but how would I limit it to a certain size? I would like to be able to create, read, and move big files to test whether the HD is failing.
Quote:
Sample usage: ./randBytes 10485760 > 10MB.dat
Code:
#!/usr/bin/perl
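The body of the randBytes script didn't survive the archive. A minimal shell stand-in for the same job, writing a fixed number of pseudo-random bytes (`head -c` and /dev/urandom exist on both Mac OS X and Linux; the filename is just an example):

```shell
# Write exactly SIZE bytes of pseudo-random data to a file.
SIZE=10485760          # 10 MiB, matching the sample usage above
head -c "$SIZE" /dev/urandom > 10MB.dat
wc -c < 10MB.dat       # reports the byte count: 10485760
```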
hdiutil create -size 20g big.dmg
The -size parameter accepts kilo/mega/giga/tera/peta/exa-byte suffixes.
It's not kilo! With no suffix it's actually 'sectors', which are 512 bytes each. I read the man page. :)
From the hdiutil man page:
Code:
Size specifiers:
Quote:
Such files are not suitable for many performance tests. If a file filled with zeros is acceptable, it can be created via '/usr/sbin/mkfile'.
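mkfile is Mac OS X-specific; on systems without it, `dd` from /dev/zero is the usual stand-in (the 16 MiB size here is arbitrary, chosen for illustration):

```shell
# Create a 16 MiB zero-filled file: bs * count = 1048576 * 16 bytes.
dd if=/dev/zero of=zeros.dat bs=1048576 count=16 2>/dev/null
wc -c < zeros.dat      # reports 16777216
```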
Quote:
If you are testing things like user quotas, then the file content shouldn't make any difference.
Thanks guys, all your replies are great. I think I'll stick with either mkfile or hdiutil. Hayne, thank you very much for the script.
The reason I'm looking for one-liners is that this is to test on customers' machines. The previous commands to peg the processor let me test under stress overnight; these let me test whether kernel panics (or any strangeness) are caused by disk usage. As you guys have probably noticed, I'm a repair tech. A one-liner lets me just type it in, then quit Terminal to reset. If I have to copy scripts and apps to the user's machine, it complicates clean-up time. Plus I might not be the person finishing the repair.
Using a USB stick and copying any test scripts you may want to use to /tmp would allow for more complex tests, and they'd be gone on reboot regardless.
It might even be a good idea to come up with a shell script a bit like applejack.sf.net that includes memtest and various stress-test scripts that go beyond AHT. Running in single-user mode, you'd get rid of everything that can go wrong with the GUI and really just test the hardware.
Quote:
Still, memorized one-liners do have a lot of advantages. Hayne: I'll take funky new filesystems into consideration if and when they actually become mainstream. Vonleigh: If you really need a giant pseudo-random file, can't access the necessary scripts, and need to test the optical drive as well, cat the .vob files from any convenient DVD together.
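The .vob trick is just concatenation. The same pattern, demonstrated with small stand-in files since the DVD volume name varies per disc (the /Volumes path in the comment is a placeholder, not a real mount point):

```shell
# Demonstrate the cat-concatenation pattern with stand-in files.
# Real usage would be: cat /Volumes/<DVD>/VIDEO_TS/*.VOB > /tmp/bigtest.dat
mkdir -p /tmp/vobdemo
printf 'AAAA' > /tmp/vobdemo/a.vob
printf 'BBBB' > /tmp/vobdemo/b.vob
cat /tmp/vobdemo/*.vob > /tmp/vobdemo/big.dat
wc -c < /tmp/vobdemo/big.dat   # reports 8 (the two 4-byte files joined)
```

Writing the result under /tmp keeps with the earlier suggestion: it disappears on reboot.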
Good idea on the vob files. I also hadn't thought of running them from /tmp. I do have access to ASD, which is more thorough than AHT.
BTW, ZFS looks really good; I read about it a little while ago. It would be really good if it were ported to Leopard. What I'm unsure of is how it'll work with the concept of the storage pool, and whether that's compatible with the virtual filesystem that Mac OS X has.