Zope Startup Questions II (Son of Zope)
Hello again. Thanks for the answers to my previous mail; they were to-the-point and, as all good answers do, provoked some more questions :) Without further ado, I give you the questions:

- Zope DB size

We already have 300 artists, and expect to come close to 1000 after a short while. The mp3s should obviously be kept separate from the main DB, but separating bitmaps from the rest of the data is not desirable. However, with 1000 artists each having, say, 300K of data, one sooner or later approaches the 2G file-size limit on Linux boxes once the other content on the site is counted in. Does anybody know if this is a limitation of the kernel or the FS? And is it going to be fixed in the foreseeable future?

On a side note, we plan on storing the mp3s on several servers spread throughout the country, and passing all requests for files to an object that queries the servers for their individual loads at that moment. Any thoughts on how to implement this in an elegant way using Zope/Python?

- Backup

How does one perform a backup of a file (Data.fs) bigger than 1G without going offline? Even copying the file to another location to back it up from there could cause strange behaviour if people are changing details in the database along the way. Or is there something I haven't understood yet? Stopping all transactions in the DB every time one does a backup is hardly an option (and 2G of data takes some time to back up, no matter what the solution :)

- Multiple servers

I've seen you mention ZEO in some of the discussions here, but could not find any information about it on the Zope site. Is this an external or internal product? What does it do? Any info appreciated.

- And something from the previous mail:
6. This is a bit specific, but anyway: users should be able to write "http://mp3.no/artistname" and be sent directly to the artist with that name. Does Zope have any specific mechanisms for doing this?
Michel wrote:
Uhm.... yes, but it would require some trickery if you want that information to come from, say, a database. Nothing too complicated, though.
Otherwise, you can just create a top-level folder for each artist. This could get pretty hard to manage, however; you might want to consider http://site/Artists/artistname instead.
This is not an alternative, simply because people want simple URLs; when they want to promote themselves they prefer URLs people remember. Thus, mp3.no/artistname is important. :) So, could you explain (roughly) the trickery involved in this?

In addition, we are going to have two classes of users, artists and listeners, with slightly different needs. Artists should be able to upload songs, enter text for their webpage etc. Listeners should be able to have a customized "view" with login/cookie, a kind of "my.mp3.no" (à la my.excite.com) where they can pick the artists they are interested in, choose their own "skin" for the website, get notification by email when one of their favourite artists releases anything, and so on.

It would be entirely acceptable for us to have two folders, "artists" and "listeners", with the users in them, as long as it is transparent to the users. Suggestions on how to best solve this in Zope are welcome (seeing as the Zen is not that strong in us - yet :)

Thanks for your time,

Alexander Limi
MP3 Norway - http://mp3.no
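For the record, the "trickery" in question usually comes down to ZPublisher's __bobo_traverse__ hook, which lets an object decide how each URL path segment is resolved. A minimal sketch, assuming a simple artist lookup (ArtistRoot and artist_db are hypothetical names; a real Zope object would also inherit the usual base classes and check permissions):

```python
class ArtistRoot:
    """Hypothetical site root: resolves /artistname straight to an
    artist record, falling back to normal attribute lookup so that
    index_html, images etc. still work."""

    def __init__(self, artist_db):
        # artist_db could be a dict, a ZODB BTree, or an SQL lookup
        self.artist_db = artist_db

    def __bobo_traverse__(self, REQUEST, name):
        # ZPublisher calls this once for each URL path segment.
        artist = self.artist_db.get(name)
        if artist is not None:
            return artist
        # Not an artist name: behave like an ordinary folder.
        return getattr(self, name)
```

The same hook could consult two separate containers (say, "artists" and "listeners") in turn, keeping the split invisible in the URL.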
On Sat, 13 Nov 1999, Alexander Limi wrote:
Does anybody know if this is a limitation of the kernel or the FS? And is it going to be fixed in the foreseeable future?
It is a limitation of the FS, and it will be fixed, but I do not know when. There are patches available, though; if you do a search on the Zope list, I think someone mentioned the patches.
throughout the country, and pass all requests for files to an object that queries the servers for their individual loads at the moment. Any thoughts on how to implement this in an elegant way using Zope/Python?
Just an idea: use straight ZServer/ZPublisher for high performance, then create an object on each server that publishes the relevant info on a different port. You can then query each server for its load. I tried something similar some time ago, but to monitor filesystem usage, and the responses were very fast.

Pavlos
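The querying side of Pavlos' suggestion could be sketched like this, in plain Python (the /load endpoint, the http_load helper and the least_loaded function are all made-up names; a real deployment would publish the load figure through ZPublisher as he describes):

```python
import urllib.request

def http_load(url, timeout=2.0):
    """Fetch a bare load figure from a hypothetical /load endpoint."""
    with urllib.request.urlopen(url + "/load", timeout=timeout) as resp:
        return float(resp.read())

def least_loaded(servers, fetch_load=http_load):
    """Return the URL of the least-loaded server, or None if no
    server answered. Unreachable or slow servers are skipped, so
    the site degrades gracefully when one mp3 host goes down."""
    best_url, best_load = None, None
    for url in servers:
        try:
            load = fetch_load(url)
        except Exception:
            continue  # server down or slow: leave it out of rotation
        if best_load is None or load < best_load:
            best_url, best_load = url, load
    return best_url
```

The request-dispatching object would then redirect each file request to whatever least_loaded returns.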
Pavlos Christoforou wrote:
On Sat, 13 Nov 1999, Alexander Limi wrote:
Does anybody know if this is a limitation of the kernel or the FS? And is it going to be fixed in the foreseeable future?
It is a limitation of the FS, and it will be fixed but I do not know when.
The next edition of Red Hat Linux will include a journaling filesystem (I believe this is the one from SGI) which allows files in the petabytes...

--
Ethan "mindlace" Fremen
you cannot abdicate responsibility for your ideology.
Ethan Fremen wrote:
Pavlos Christoforou wrote:
On Sat, 13 Nov 1999, Alexander Limi wrote:
Does anybody know if this is a limitation of the kernel or the FS? And is it going to be fixed in the foreseeable future?
It is a limitation of the FS, and it will be fixed but I do not know when.
The next edition of Red Hat Linux will include a journaling filesystem (I believe this is the one from SGI) which allows files in the petabytes...
This is incorrect. XFS has yet to see the light of day for Linux. Much more probable (since they are _useable_) would be ext3 and/or ReiserFS, which both have alpha-level journalling support. Currently ext3 journals _all_ of the data, as opposed to meta-data only. It is also possible to apply a patch for larger file sizes on ext2. Of course, running Linux on an Alpha will also avoid any file-size limitations.

--
"They laughed at Columbus, they laughed at Fulton, they laughed at the
Wright brothers. But they also laughed at Bozo the Clown." -- Carl Sagan
Bill Anderson wrote:
This is incorrect. XFS has yet to see the light of day for Linux. Much more probable (since they are _useable_) would be ext3 and/or ReiserFS, which both have alpha-level journalling support. Currently ext3 journals _all_ of the data, as opposed to meta-data only.
And the 2GB limit is in the Linux VFS. XFS will (assuming it comes out) change this so it supports more than 2GB, but from what I understood, neither ext3 nor ReiserFS will support >2GB files on 32-bit Linux.

--
Itamar - itamars@ibm.net
Itamar Shtull-Trauring wrote:
Bill Anderson wrote:
This is incorrect. XFS has yet to see the light of day for Linux. Much more probable (since they are _useable_) would be ext3 and/or ReiserFS, which both have alpha-level journalling support. Currently ext3 journals _all_ of the data, as opposed to meta-data only.
And the 2GB limit is in the Linux VFS - XFS will (assuming it comes out) change it so it supports more than 2GB, but from what I understood neither ext3 nor reiserfs will support >2GB files on 32-bit Linux.
It is possible to break the 2GB limit on Linux-x86; it requires the Large File Summit patch, which can be found via kernelnotes.org. I have not yet tried it myself, but it is on the list for this month. :-) It has been indicated that if the LFS patch works out well enough, without introducing too many new bugs, it may go into ext3. Only time will tell.

Bill

--
"They laughed at Columbus, they laughed at Fulton, they laughed at the
Wright brothers. But they also laughed at Bozo the Clown." -- Carl Sagan
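Whether a given kernel/filesystem combination actually accepts files past the 2GB mark can be probed directly, by seeking just beyond the limit in a sparse temp file and writing one byte. A small sketch (supports_large_files is a made-up name; on systems without large-file support the write fails with an OSError such as EFBIG):

```python
import os
import tempfile

def supports_large_files(directory):
    """Probe for large-file support by writing one byte just past
    the 2GB mark in a sparse temp file. Returns True if the
    OS/filesystem accepts it, False if the write is refused."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.lseek(fd, 2 ** 31, os.SEEK_SET)  # one byte past 2GB
        os.write(fd, b"x")
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.remove(path)
```

Because the file is sparse, the probe costs almost no disk space even when it succeeds.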
participants (5)

- Alexander Limi
- Bill Anderson
- Ethan Fremen
- Itamar Shtull-Trauring
- Pavlos Christoforou