how to transfer files between Mac and Linux


stephenk
05-09-2003, 12:06 PM
I have tons of files stored on my PowerBook right now, and I want to save them to my Linux computer at home since it has huge disk space, but I don't know how. Would anyone please tell me how to do it?

hayne
05-09-2003, 12:27 PM
You'll have to give us more info about your home setup. Are your computers all on a home network?

If so, you have many options. Probably the easiest is ftp. You can do it in two directions (push or pull). Just start an ftp server on one machine and run an ftp client on the other. For running servers on the OS X machine, see the System Preferences "Sharing" section. Clicking the ftp checkbox there turns on the ftp server.
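For what it's worth, a pull from the Linux side might look something like this once the Mac's FTP server is enabled (the 192.168.1.10 address and directory name are just placeholders):

# on the Linux box, pulling files from the Mac
ftp 192.168.1.10
# at the ftp> prompt, after logging in with your OS X account:
#   binary        (binary mode, so files aren't mangled)
#   cd Documents
#   prompt        (turn off per-file confirmation)
#   mget *        (grab everything in the directory)
#   quit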

Other options include enabling NFS or SMB on your Linux box and then using "Connect to Server" from the OS X Finder.

stephenk
05-09-2003, 03:40 PM
I am using Mac OS X 10.2.5 and Red Hat 8.0. I think SMB won't work properly, but NFS will. I don't know how to set up NFS between Red Hat and the Mac, though. Would you mind telling me how, please?

hayne
05-09-2003, 04:50 PM
I'm afraid you're on your own for setting up an NFS server on Linux. I've never tried it myself, but I'm sure there is a HOWTO on the topic, so just search. (The sketch below shows the general shape of it.)
It is also possible to set up an NFS server on OS X, but again I've never done it, and I don't think it comes installed by default. I recall there was a macosxhints article about this, so have a look around. But if you are going to go that direction, first make sure you actually have NFS client facilities on your Linux machine. (The NFS client is built in to OS X via "Connect to Server", and it works well.)
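For reference, the Linux side usually comes down to a line in /etc/exports; the path and subnet here are placeholders, so check the HOWTO for your setup:

# /etc/exports on the Red Hat box (placeholder path and subnet)
/home/shared  192.168.1.0/255.255.255.0(rw,sync)

# then, as root:
exportfs -a
service nfs restart

# and from the Mac, Go > Connect to Server:
#   nfs://linuxbox/home/shared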

As I said before, maybe it is easiest to just do ftp as needed.

Oh - there's also the possibility of WebDAV. Again, OS X has the client side built in. If you want to run the server side on OS X, there are instructions somewhere on how to install it with Apache, so again have a look. (webdav.org is a good starting place.)
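If you go the Apache route on either end, the mod_dav part of the config is roughly this sketch (paths are placeholders; see webdav.org for the real instructions):

# httpd.conf snippet for mod_dav (placeholder paths)
DAVLockDB /var/lock/apache/DAVLock
<Location /dav>
    DAV On
</Location>

# the Mac client side is again Go > Connect to Server:
#   http://serveraddress/dav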

stephenk
05-11-2003, 06:56 PM
Um... FTP isn't the way I'd like to go, because it's very easy to crack. Even though I don't think people can trace it, it's too simple. Actually, SSH is much better; at least the password is encrypted.

But anyway, thanks. I will look into NFS.

yellow
05-11-2003, 08:06 PM
scp (secure copy) is kind of a pain, but it works. It sends/gets a file via ssh. Look it up using man for more info. Here's a quick synopsis:

To send a local file:

scp /path/file username@machine.ip.addy:/path/to/save/file/

To get a remote file:

scp username@machine.ip.addy:/path/to/file /local/path/to/save/file

For Example:

"scp .tcshrc yellow@apple.com:~/." will copy the .tcshrc from my local machine to the machine apple.com and place it in my home dir.


Hope this helps a bit..

yellow
05-11-2003, 08:08 PM
Also, if you set up sshd on your Linux box to allow SFTP, you can use any number of GUIfied FTP clients that support SFTP to move the files. My personal favorite is Transmit (http://www.panic.com/transmit/).
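For reference, allowing SFTP is normally a one-line change in sshd_config; the sftp-server path below is the usual Red Hat location, so adjust for your install:

# /etc/ssh/sshd_config on the Linux box
Subsystem sftp /usr/libexec/openssh/sftp-server

# then restart sshd (Red Hat):
service sshd restart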

womby
05-15-2003, 11:32 AM
Install Samba on your Linux box.
It should come 99% configured when it is installed.

Make a user called backup.

Add an encrypted Samba password for the backup user:
type smbpasswd -a backup

Connect as the backup user from your Mac:

Go menu >> Connect to Server

smb://serveraddress/

Log in as backup.

Copy the files over.

Disconnect. (A sketch of the Linux side is below.)
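The Linux side of those steps might look something like this sketch (the share path is a placeholder; Red Hat's stock smb.conf has similar examples):

# as root on the Linux box: create the user and set a Samba password
useradd backup
smbpasswd -a backup

# add a share to /etc/samba/smb.conf:
[backup]
   path = /home/backup
   valid users = backup
   writable = yes

# restart Samba (Red Hat):
service smb restart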

Don't use scp or FTP; you will lose your resource forks. You will also lose them if you rename the files from the Linux machine.

Most files on OS X don't use resource forks any more, but some still do, and it is a real pain in the arse when it goes wrong.

I was creating a large batch of QuickTime files (I did it wrong); when I stripped the resource forks from the files, they stopped working, but I had already deleted the originals.

Luckily I still had the source files, but a week's rendering had to be done again.

womby
05-15-2003, 11:35 AM
Originally posted by stephenk
Um... FTP isn't the way I'd like to go, because it's very easy to crack. Even though I don't think people can trace it, it's too simple. Actually, SSH is much better; at least the password is encrypted.

But anyway, thanks. I will look into NFS.

Oh, and NFS is the least secure system you could ever imagine. There are no passwords.

user connects to server
server sez "whats your username"
user sez "bob"
server sez "here are your files"

tlarkin
05-15-2003, 12:32 PM
I use Samba because it's easy to use, and I am not a UNIX/Linux genius. I know enough to just get by.

Also, I use webmin to manage all my servers such as samba and apache.

houchin
05-15-2003, 02:27 PM
Originally posted by stephenk
Um... FTP isn't the way I'd like to go, because it's very easy to crack. Even though I don't think people can trace it, it's too simple. Actually, SSH is much better; at least the password is encrypted.

But anyway, thanks. I will look into NFS.

I think most of us had assumed you would do this while both systems were on your home network. If that's the case, what are you worried about security for? If people can snoop on those sessions, then you have much bigger security problems than just this file transfer.

ssh will also be much slower. If you still don't want to use FTP, why don't you just change your password to a throwaway one before you do the transfer, and then change it back after you're done?

I attempted to set up NFS a while ago, and it is definitely not for the faint of heart. I think getting Samba working (put the server on the Linux box and have the Mac be the client) would be much easier.

You can also try installing netatalk, which is an Apple Filing Protocol implementation for Unix derivatives. You can get it at: http://netatalk.sourceforge.net/
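If you try netatalk, a share is roughly one line in its AppleVolumes.default file; the path and name below are placeholders, and the config location varies by distribution:

# /etc/atalk/AppleVolumes.default on the Linux box
/home/backup "Linux Backup"

# then connect from the Mac via Go > Connect to Server:
#   afp://serveraddress/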

But I'd still try getting Samba working.

On a separate note, have you dealt with the issue of resource forks in those Mac files that you want to store on Linux? If you're just archiving the files over there for safekeeping and don't need to access them all the time, I would stuff anything that contains a resource fork before copying it over to the Linux box.

stephenk
05-16-2003, 05:01 AM
First, I must thank you all for the replies. I have been thinking about using Samba too, but one of my friends told me that when he transferred files between his Mac and Linux machines, some files got corrupted (he said files transferred from the Mac to Linux and retrieved from Linux back to the Mac would be missing some bytes and be corrupt). Is that true?
I am afraid some of my important files will get killed this way; that's why I am hesitating.

And this is my case: my Mac is a PowerBook, so sometimes I take it out with me, and when I want to transfer files back to my Linux machine, is Samba a good way to do it? If so, are there any additional settings I have to make in order to get everything going?

Thanks,
stephenk

stephenk
05-16-2003, 05:08 AM
Originally posted by houchin
On a separate note, have you dealt with the issue of resource forks in those Mac files that you want to store on Linux? If you're just archiving the files over there for safekeeping and don't need to access them all the time, I would stuff anything that contains a resource fork before copying it over to the Linux box.

What are these resource forks you guys are talking about? Would you mind explaining what's going on?

ylon
05-16-2003, 07:17 AM
OpenAFS might be what you're wanting.

http://www.openafs.org/

bluehz
05-16-2003, 08:25 AM
I use a Slackware Linux box as my main server on our LAN, and I regularly move files back and forth. I have tried NFS, Samba, and netatalk, and they are all hard to set up, hard to maintain, pose security issues, and are inconsistent. I personally use scp and sftp for all my transfers. If I want to quickly move something over manually, I use scp, as in the example above. I use ssh login on my Mac, so with the right setup and exchange of SSH keys I don't even have to enter a password (see the key-setup sketch after the script). Another method I use is to drop files I want to move into a specific directory on the Mac and then run a shell script I wrote that moves everything to the server, changes permissions on the files after they get moved over, and then asks whether you want to delete the local files. Using sftp, you can perform functions over the network during the whole transfer process (e.g. copy over, then change permissions). It is specifically set up for me to move stuff over to a publicly accessible web space, so I have tuned the permissions, locations, etc. to reflect that. You probably don't want to use the script as-is, but it might give you some ideas:
#!/bin/zsh

##############################################################################
# USAGE: upload-linux
# DESCRIPTION:
# upload-linux - uploads files
# from a specific dir on the local machine
# to a specific dir on the Linux server,
# then changes permissions on
# the files that have been transferred
##############################################################################

###
# Variables
###

locald="/Users/documents/uploads"
remoted="/var/www/uploads"

###
# Usage
###

usage="\
Description: Uploads all files in $locald
to server directory $remoted
Usage: upload-linux"

# this script takes no arguments; print usage if any are given
if test $# -ge 1; then
    echo "${usage}" 1>&2
    exit 1
fi

###
# Main process
###

cd "$locald"
sftp -b /dev/stdin root@linux <<EOF
cd $remoted
put *
chmod 644 *
chown 99 *
chgrp 98 *
quit
EOF

echo -e "Do you want to delete the local files?: \c"
read fname
# quote the variable so an empty reply doesn't break the test
if [ "$fname" = yes ] || [ "$fname" = y ]; then
    echo "Deleting local files..."
    rm "$locald"/*
    echo "...done!"
else
    echo "Local files saved!"
fi
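For the password-free logins mentioned above, the usual OpenSSH key exchange is roughly this (the hostname and username are placeholders; leave the passphrase empty if you want no prompt at all):

# on the Mac: generate a key pair
ssh-keygen -t dsa

# copy the public key over and append it to authorized_keys:
scp ~/.ssh/id_dsa.pub stephenk@linuxbox:
ssh stephenk@linuxbox "cat id_dsa.pub >> ~/.ssh/authorized_keys; rm id_dsa.pub"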

I have also created a droplet by taking a Unix shell script and converting it to a drag-and-drop app with the fantastic DropScript. Let me know if you want info on that process.

houchin
05-16-2003, 08:37 AM
Originally posted by stephenk
what are the resource forks you guys are talking about? would u mind to explain what is going on?

On the HFS and HFS+ file systems (the Mac-specific file systems), when you point at a particular file, the system may actually be pointing at two different forks of data.

The data fork is what most people consider to be the file. This is what gets correctly transferred to a PC/Linux/... machine when you do a plain file copy.

The resource fork is a separate portion of the file that has a standard, database-like format. In Mac OS 9, it's primarily used to store application code, but it can also be used to store document data (however, most apps have been moving away from this for several years).

If you are indeed copying important files that have resource forks (I'm not sure there's an easy way to find out), then the only reliable way to transfer them to a PC/Linux file system is to stuff them (with StuffIt, not just any archiver) first.

However, if you're talking about copying PC compatible files (MS Office docs, JPEG/GIF/TIFF files, PDF files, ...) then you can safely ignore any resource fork issues.

About the only way I can think of to test whether this is safe for a particular file is to use the command-line cp program to copy the file in question, which will not copy the resource fork. If the copied version is still OK, then it's safe to ignore the resource fork.
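A rough sketch of that test ("myfile" is a placeholder; on 10.2 the resource fork is also visible from the Terminal as myfile/rsrc):

# copy with cp, which drops the resource fork
cp myfile /tmp/myfile-test
# now open /tmp/myfile-test in its application; if it still works,
# the resource fork wasn't important

# you can also check whether there is a resource fork at all;
# a size of 0 here means there is nothing to lose
ls -l myfile/rsrc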

womby
05-16-2003, 09:02 AM
The Mac will keep your resource forks totally intact, and if you copy files to a server and back again they will be identical to the originals (except for the modification date).

Where the problem happens:

As mentioned above, the Mac file system supports separate forks contained within a single file.

When you copy a file to another file system that doesn't support resource forks, the Mac is very helpful and splits the file into its separate forks, saving each as its own file.

You don't see these separate files if you look at the file system from the Finder or any GUI interface, but from the Terminal they are visible.

If you then rename or modify the files directly (not using the Mac's transparent interface to them), it is possible to break the link between the forks.

The resource forks are saved as files starting with
._
If the names don't match, the files become corrupted. (See the sketch below.)

But over time this will become less and less of a problem; all the new file formats for OS X tend to avoid using the resource fork.

Use a Linux server, but don't move any of the files around directly on the Linux box or using the Terminal... always use the Finder and you will have no problems.
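From the Linux side, the split looks something like this (filenames are placeholders):

# on the Linux box, inside the shared directory:
ls -a
#   movie.mov       <- data fork (the file you see in the Finder)
#   ._movie.mov     <- resource fork and Finder info
# renaming movie.mov without also renaming ._movie.mov
# is what breaks the pair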

Titanium Man
05-16-2003, 09:12 AM
Unless you've got a gigabit Ethernet network, the fastest way will be to purchase a FireWire hard drive to back up to. I've done the whole back-up-my-OS-X-machine-to-my-Linux/FreeBSD-box thing, and besides worrying about preserving resource forks, I was moving large files (my home directory) over Ethernet, which was very slow. With a FireWire drive, the transfer speed is much faster, and there are no worries about different file systems speaking the same language. Just my 2 cents. However, if I must back up over the network, I like rsync. It doesn't preserve resource forks, but the manpage has a LOT of options, including tunneling over ssh (a typical run is sketched below). There is a version in CVS at www.opendarwin.org which will preserve resource forks, but the version has to be the same on the sending and receiving machines. I don't know if you'd be able to compile the resource-fork-savvy version on a Linux box.
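A typical rsync-over-ssh run looks something like this (host and paths are placeholders):

# push the PowerBook's home directory to the Linux box over ssh;
# -a preserves permissions and times, -v is verbose
rsync -av -e ssh /Users/stephenk/ stephenk@linuxbox:/backup/powerbook/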

thatch
05-17-2003, 07:34 PM
I am trying to clone my Linux box to my Mac for backup, onto a Mac OS Standard format (HFS, not HFS+) partition on the Mac. I've tried, first, a Samba server running as a daemon on the Linux box, and second, rsync running as a daemon on the Linux box, both over the home Ethernet LAN. With neither have I had total success, due to permissions: I can get the files moved, but the permissions are whacked. It seems to turn out that 'chown whatever' doesn't work on an HFS volume.

Can anyone please verify that to be true? (That you can't change the owner and/or group on an HFS volume.)

And if that's so, then I am wondering how to make this backup work. Locally on the same Linux box, such a backup scheme works fine. But networked across two boxes is where the trouble is for me, with HFS on the receiving (Mac) end.

I know there are other ways of backing up Linux boxen, but I'm interested in the method I've described because it's all I have to work with at the moment, i.e. plenty of room on my Mac but none to spare on the Linux box. :eek:

bluehz
05-17-2003, 09:20 PM
thatch - you say your permissions are getting whacked in the transfer. What is leading you to believe this? One thing that may be problematic: unless you have synchronized user IDs (UIDs) and group IDs (GIDs) on each box, you may get some weird transposition. For example, say on your Mac you are UID 501, but on the Linux box you are UID 502. When you transfer those Linux files to the Mac box, they will show up as UID 502, which is basically an unmapped user on the Mac. Just a thought; not sure this is your problem.
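A quick way to check is the id command on each box (the username and numbers here are only illustrative):

# run on both machines and compare the numbers
id stephenk
# Mac:   uid=501(stephenk) gid=501(stephenk) ...
# Linux: uid=502(stephenk) gid=502(stephenk) ...
# if the numbers differ, ownership will look remapped after the transfer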

Can I assume you are running rsync as root on both the Mac and Linux end of things during your backup?

thatch
05-17-2003, 10:49 PM
bluehz, all my perms are showing up as my user name and group "unknown" after the transfer, using either Samba or rsync. I have set both servers to uid = 0 and gid = 0. And yes, both sides are run as root.

I've narrowed the problem down to HFS, and that is why I was hoping someone could confirm that, in fact, permissions cannot be changed on that file system. Please, if anybody has a spare empty partition at hand, format it to HFS Standard, try to 'chown root:wheel' something on it, and let me know if you are successful.

bluehz
05-18-2003, 01:02 AM
I have not tried the HFS test myself - but I did run across this dated article from 2000:

http://www.mit.edu/people/wsanchez/papers/USENIX_2000/
The HFS+ volume format, unlike the HFS volume format, fortunately provides storage for Unix-style metadata (eg. owner and mode bits). This made it a lot easier to enable the use of HFS+ in the Darwin environment. However, a few incompatibilities still exist.

This is an interesting article on HFS and Linux:

http://www-sccm.stanford.edu/Students/hargrove/HFS/README.html#toc1

thatch
05-18-2003, 01:31 AM
Hmm. The passage you posted would seem to suggest that HFS doesn't hold Unix-style metadata. However, I have found it to work fine on a local Linux box when using an HFS exchange partition. I still don't know why a networked HFS partition wouldn't work, though.

The test format that I requested would take literally seconds to do. It's really simple using Disk Utility on Mac OS X, and formatting back to HFS+ is just as quick. I know it's not likely that someone readily has an empty partition for this test, though. But hopefully someone does and will post here eventually, as that is the only real proof of this I can be sure of. The smoke test, if you will.

bluehz
05-18-2003, 06:48 AM
Ok - I tested this out by creating an HFS-formatted disk image. User/group and permissions are definitely borked on an HFS disk. Here's the technique I used:

# create a 25 MB HFS-formatted disk image named "test.dmg" with volname "untitled"
# verified that the disk was HFS formatted
# and mounted as a "standard mac disk" in the Finder

hdiutil create -size 25m -fs HFS test

# mount the disk image

hdiutil mount test.dmg

# rsync test run

sudo rsync -avr /usr/bin/ /Volumes/untitled/

##########
# RESULTS
##########

1. Many errors appeared relating to symlinks.
2. All files had owner/group set to me and "unknown", even though I used root to run rsync.
3. All file permissions were reset to 600 (rw-------) on all files and dirs on the HFS volume.

I also tried the above process logged in as root, mounting the disk as root, and running rsync as root, and still got the same results.

thatch
05-18-2003, 12:08 PM
You have definitely confirmed what I thought: perms aren't going to work on an HFS volume. Thanks, bluehz. Your results were the same as mine.

There's just one more thing to try, if you still have the volume: try a chown on it to see whether you can change the owner and/or group to something other than what it currently is. I'm almost certain it would not work, saying something like "operation not supported". Either that, or it might seem to do the job, but when you check, it will not have. (Something like the sketch below.)
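For anyone with a spare partition, the test would be along these lines (the volume name is a placeholder):

# with an HFS (not HFS+) volume mounted at /Volumes/untitled:
touch /Volumes/untitled/testfile
sudo chown root:wheel /Volumes/untitled/testfile
ls -l /Volumes/untitled/testfile
# if HFS really can't store ownership, the owner/group will be
# unchanged (or the chown will fail with "operation not supported")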

I still don't know or understand why HFS won't work over the network but does on the same Linux box. It's a mystery.

Titanium Man
05-18-2003, 12:35 PM
That's interesting stuff, bluehz. I just realized that if you create two identical disk images, one HFS and the other HFS+, then mount them, do "Get Info" on them, and look under the "Ownership & Permissions" section, only the HFS+ volume has the option "Ignore ownership on this volume". It's checked by default, and I can only change permissions on the volume if I uncheck it. Otherwise, it acts like the HFS volume (and won't budge at all when I try to change the group from "unknown" to something else).