The macosxhints Forums

The macosxhints Forums (http://hintsforums.macworld.com/index.php)
-   UNIX - General (http://hintsforums.macworld.com/forumdisplay.php?f=16)
-   -   Pegging the Processor (http://hintsforums.macworld.com/showthread.php?t=55931)

vonleigh 05-21-2006 07:08 AM

Pegging the Processor
 
Hi guys,

What I'm looking for are suggestions for short shell, Perl, Python, or Ruby scripts, or bash/zsh one-liners, that will peg the processor. Any ideas?

v

hayne 05-21-2006 07:16 AM

# maxcpuuser, maxcpusystem: to max out the user & system CPU usage
alias maxcpuuser='yes > /dev/null'
alias maxcpusystem='cat /dev/zero > /dev/null'
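Note that a single `yes > /dev/null` pegs only one core. On a multi-CPU machine you would want one instance per core; a minimal sketch (assuming `sysctl -n hw.ncpu` for the core count on OS X, with `nproc` as a Linux fallback):

```shell
# Spawn one busy loop per CPU core, run them briefly, then clean up.
ncpu=$(sysctl -n hw.ncpu 2>/dev/null || nproc)
for i in $(seq "$ncpu"); do
    yes > /dev/null &      # each background 'yes' saturates one core
done
sleep 2                    # lengthen this for a real overnight stress run
kill $(jobs -p) 2>/dev/null  # reset: kill all the background loops
```

Quitting the shell (or Terminal) also kills the background jobs, which fits the "type it in, quit to reset" workflow.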

vonleigh 06-27-2006 05:33 AM

Thanks Hayne.

I do have one other thing that I'd like to consult the board about. I've been trying to max out the RAM usage as well, but haven't had much luck. I tried the following:

Code:

% hdid -nomount ram://`expr 2 "*" 1024 "*" 1024`
/dev/disk3 
% newfs_hfs /dev/disk3
Initialized /dev/rdisk3 as a 1024 MB HFS Plus volume
% mkdir /tmp/ramdisk
% mount_hfs /dev/disk3 /tmp/ramdisk
% cat /dev/random > /tmp/ramdisk/file

Then I waited for the cat to fill the RAM disk (I'm sure there's a way to just generate a file that large directly, but alas, Google didn't help).
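For the "generate a file that large directly" part, `dd` with a block count does it; a sketch (`/tmp/ddtest.bin` is a stand-in path — point `OUT` at the RAM disk file, e.g. `/tmp/ramdisk/file`, to fill it; the byte-count `bs` avoids the `bs=1m` vs `bs=1M` spelling difference between BSD and GNU dd):

```shell
# Write exactly 16 x 1 MB blocks of zeros to a fixed-size file.
OUT=${OUT:-/tmp/ddtest.bin}
dd if=/dev/zero of="$OUT" bs=1048576 count=16 2>/dev/null
```

Raise `count` to match the RAM disk size (e.g. `count=1024` for the 1024 MB volume above).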

My RAM usage (viewed in Activity Monitor) did not seem to change much at all. I was looking at Free, VM Size, Active, Inactive, Wired, etc., and they all stayed about the same. I have about 1.25 GiB of RAM.

Am I fighting an uphill battle? I figure that perhaps the VM code is smarter and pages have to be reused somehow, or that I have to max out the 4 GB per app, or somehow exceed the total VM Size.

I know I could use something like memtest, but if at all avoidable I'd love to be able to type in a one liner to stress-test a machine.

-v

voldenuit 06-27-2006 05:44 AM

memtest from memtestosx.org now also runs while OS X is booted.

It's still a better idea to use it in single-user mode, but knowing that might be helpful for what you're trying to do.

hayne 06-27-2006 11:50 AM

There are lots of ways that you could use up memory via the command-line.
For example:
perl -e '$str = "a" x 1000000; sleep 1000'

But this isn't very controlled - it's hard to know how much memory you actually will use up with the above command.

You could use my "VMTester" utility (http://hayne.net/MacDev/VMTester/) to stress test the RAM usage.

vonleigh 07-15-2006 03:26 PM

Thanks guys, I appreciate all the info.

-v

Gnarlodious 07-15-2006 10:32 PM

yes > /dev/null

hayne 07-16-2006 01:25 AM

Quote:

Originally Posted by Gnarlodious
yes > /dev/null

I suppose you were just throwing your support behind my suggestion of post #2.
:)

voldenuit 07-16-2006 06:34 AM

It does look a lot like that, minus the elegance of the aliasing...

But then, reading the whole thread before posting an answer seems to be no longer "en vogue", even when the post count is in the single digits.

vonleigh 09-08-2006 01:48 PM

I've been using all the suggestions here, thanks a lot everyone.

I wanted to finally ask about using up the HD. I know I can redirect /dev/random into a file on the HD to make a big file, but how would I limit it to a certain size? I would like to be able to create, read, and move big files to test whether the HD is failing.

hayne 09-08-2006 02:02 PM

Quote:

Originally Posted by vonleigh (Post 320389)
I wanted to finally ask about using up the HD. I know I can pipe random to a file on the HD to make a big file, but how would I limit it to a certain size?

You could use a script like the one below.
Sample usage:
./randBytes 10485760 > 10MB.dat

Code:

#!/usr/bin/perl
use strict;
use warnings;

# randBytes
# This script takes one command-line parameter specifying the number of bytes
# desired in the output.
# The script outputs (to STDOUT) the specified number of random bytes.
# Cameron Hayne (macdev@hayne.net), August 2006

die "usage: randBytes desiredNumBytes\n" if $#ARGV < 0;
my $desiredNumBytes = $ARGV[0];

my $buffsize = 64 * 1024; # experiment with different buffer sizes
if ($desiredNumBytes < $buffsize)
{
    $buffsize = $desiredNumBytes;
}
my $buffer;
my $numLeft = $desiredNumBytes;
open(RAND, "/dev/random") or die "Can't open /dev/random: $!\n";
binmode(RAND);   # no-op on Unix, but makes the binary I/O intent explicit
binmode(STDOUT);
while ($numLeft > 0)
{
    my $nread = read(RAND, $buffer, $buffsize);
    die "read failed: $!\n" unless defined $nread;
    if ($nread <= $numLeft)
    {
        print $buffer;
        $numLeft -= $nread;
    }
    else
    {
        print substr($buffer, 0, $numLeft);
        $numLeft = 0;
    }
}
close(RAND);
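If the machine's head(1) supports `-c` (both the OS X/BSD and GNU versions do), the same job fits in a one-liner; a sketch using `/dev/urandom`, which never blocks (on OS X, `/dev/random` behaves the same way):

```shell
# Emit exactly 10 MB of random bytes into a file.
head -c 10485760 /dev/urandom > /tmp/10MB.dat
```

That suits the "memorized one-liner, nothing to clean up but one file" workflow better than copying a script over.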


acme.mail.order 09-08-2006 08:21 PM

hdiutil create -size 20g big.dmg

the -size parameter accepts kilo/mega/giga/tera/peta/exa bytes

ThreeDee 09-08-2006 08:42 PM

It's not kilo! It's actually 'sectors', which are 512 bytes each. I read the man page. :)

acme.mail.order 09-08-2006 08:51 PM

From the hdiutil man page:
Code:

  Size specifiers:
  -size ??b|??k|??m|??g|??t|??p|??e
            -size specifies the size of the image in the style
            of mkfile(8) with the addition of tera-, peta-, and
            exa-bytes sizes (note that 'b' specifies a number
            of sectors, not bytes).

If you specify 100b, you get 100 sectors. If you specify 100k, you get 100 kilobytes. If you specify 100e, and it finishes, you are the Data Storage King.

hayne 09-08-2006 09:08 PM

Quote:

Originally Posted by acme.mail.order (Post 320473)
hdiutil create -size 20g big.dmg

The problem is that this creates files that are almost all zeros.
Such files are not suitable for many tests of performance.

If a file filled with zeros is acceptable, it can be done via '/usr/sbin/mkfile'
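For reference, mkfile takes the same k/m/g suffixes (e.g. `mkfile 1g /tmp/zeros.dat`), but it exists only on OS X and Solaris and it really writes all the zeros. A sketch of a portable dd equivalent that instead seeks past the end, producing a sparse file of the right size without writing the data (note the difference matters: a sparse file consumes almost no actual disk space, so it won't exercise the drive the way mkfile's output does):

```shell
# Create a 1 MB file by seeking to the desired size and writing nothing.
dd if=/dev/zero of=/tmp/zeros.dat bs=1 count=0 seek=1048576 2>/dev/null
```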

acme.mail.order 09-08-2006 10:15 PM

Quote:

Originally Posted by vonleigh (Post 320389)
...read and move big files...

One billion zeros are just as much work to read and write as one billion random numbers. I agree that if you are doing read-integrity checks it doesn't make much sense, but with today's self-analyzing drives is this even a valid testing method anymore? By the time any home-grown test fails there should be something reported by the drive's diagnostics.

If you are testing things like user quotas then the file content shouldn't make any difference.

vonleigh 09-09-2006 09:24 PM

Thanks guys, all your replies are great. I think I'll stick with either mkfile or hdiutil. Hayne, thank you very much for the script.

The reason I'm looking for one-liners is that this is for testing on customers' machines. The earlier commands to peg the processor let me test under stress overnight; these let me test whether kernel panics (or any strangeness) are caused by disk usage. As you guys have probably noticed, I'm a repair tech.

A one-liner lets me just type it in, then quit Terminal to reset. If I have to copy scripts and apps to the user's machine, it complicates clean-up time. Plus I might not be the person finishing the repair.

voldenuit 09-09-2006 09:53 PM

Using a USB stick and copying any test scripts you want to use to /tmp will allow for more complex tests, and they'll nonetheless be gone on reboot.

It might even be a good idea to come up with a shell script a bit like applejack.sf.net to include memtest and various stress test scripts that go beyond AHT. Running in single user mode, you'd get rid of everything that can go wrong with the GUI and really just test hardware.

hayne 09-09-2006 11:16 PM

Quote:

Originally Posted by acme.mail.order (Post 320497)
One billion zeros are just as much work to read and write as one billion random numbers.

That may be true for HFS+ and other filesystems currently in use on OS X, but it is not true for filesystems that support transparent compression such as ZFS. Note that ZFS is supplied with Solaris 10 and is apparently being ported to OS X, so we may see this with Leopard.

acme.mail.order 09-09-2006 11:37 PM

Quote:

Originally Posted by voldenuit (Post 320621)
Using a USB stick and copying any test scripts you may want to use to /tmp will allow for more complex tests and be gone on reboot nonetheless.

Or put them in a tar file on his website and just download when necessary. It will be a rare company that doesn't have internet access. (Working network card might be another issue)

Still, memorized one-liners do have a lot of advantages.

Hayne: I'll take funky new filesystems into consideration if and when they actually become mainstream.

Vonleigh: If you really need a giant pseudo-random file, can't access the necessary scripts, and need to test the optical drive as well, cat the .vob files from any convenient DVD together.

vonleigh 09-10-2006 12:53 PM

Good idea on the vob files. I also hadn't thought of running them from /tmp. I do have access to ASD, which is more thorough than AHT.

BTW, ZFS looks really good; I read about it a little while ago. It would be really good if ported to Leopard. What I'm unsure of is how it'll work with the concept of the storage pool, and whether that's compatible with the virtual filesystem that Mac OS X has.


All times are GMT -5. The time now is 05:29 PM.

Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2014, vBulletin Solutions, Inc.
Site design © IDG Consumer & SMB; individuals retain copyright of their postings
but consent to the possible use of their material in other areas of IDG Consumer & SMB.