Subject: RE: Files over 2Gb?
From: Michalowski Thierry (Thierry.Michalowski@edipresse.ch)
Date: Wed Nov 01 2000 - 07:19:46 EST
Linux on a 32-bit platform (e.g. x86) cannot natively handle files larger
than 2 GB.
I believe there is a patch you can apply, at least to your kernel, to handle
this.
Or you can switch to a 64-bit platform, where a cheap Sun Ultra 5 or an
older Alpha workstation would do the trick.
That involves changing hardware, at least until Intel effectively proves it
can manufacture a working 64-bit chipset...
Another point where you could be stuck is the AFP protocol: only AFP 2.2
and later handle files larger than 2 GB.
Hopefully there are other people on this list who can explain the current
status of netatalk's support for that AFP version.
My 0.02 EUR
Thierry
-----Original Message-----
From: Paul Reilly [mailto:pareilly@tcd.ie]
Sent: Wednesday, November 01, 2000 1:07 PM
To: Russell Kerrison
Cc: netatalk-admins@umich.edu
Subject: Re: Files over 2Gb?
> Is it simply the case that the Appleshare file system is a bit dated and,
> like HFS, can't recognise files greater than 2Gb?
>
I would have thought this is the same problem I had with Linux.
From my understanding of hitting this limit on Linux, it
is not specific to the filesystem, but is in fact caused by the
data structures in the C library routines that are used to
represent files in the filesystem. Until recently, the C data
structures used for low-level filesystem routines had a limit of 2 GB.
For Linux, I was told that I could update my kernel and C library functions
to handle files larger than 2 GB.
The point is, I think the reason this limit exists is the size of the
old C data structures, not the actual filesystem.
Paul
This archive was generated by hypermail 2b28 : Wed Jan 17 2001 - 14:32:32 EST