
RE: Fulldome Production Storage and Network Solutions for 4k and beyond

  • antoinedurr
    Oct 10, 2013

   Yeah, having 48 TB of data suddenly go offline can put a dent in your workflow.  And as you mention, even if you do have a backup, getting all that data to a place where you can use it efficiently is a slow process.  Have you considered /reducing/ the size of your NAS(es), but having more of them?  If one goes down, it sidelines only part of your facility rather than the whole thing at once!  You'd want some setup in place to allocate space only on non-rebuilding arrays, and once a rebuild is done you can go back to creating files on that one.  You could also reduce the priority of the rebuild (at the expense of having it take longer).
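      The "allocate only on non-rebuilding arrays" idea above could be a simple policy in whatever tool hands out project space.  A minimal sketch (the NAS names, sizes, and status fields here are invented for illustration):

      ```python
      # Hypothetical sketch: choose a NAS for new files, skipping any
      # array that is mid-rebuild.  Status data is made up; in practice
      # it would come from polling each NAS's health/rebuild state.

      def pick_nas(nases):
          """Return the healthy NAS with the most free space, or None."""
          healthy = [n for n in nases if not n["rebuilding"]]
          if not healthy:
              return None
          return max(healthy, key=lambda n: n["free_tb"])

      nases = [
          {"name": "nas-a", "free_tb": 4.0,  "rebuilding": True},
          {"name": "nas-b", "free_tb": 9.5,  "rebuilding": False},
          {"name": "nas-c", "free_tb": 12.0, "rebuilding": False},
      ]

      print(pick_nas(nases)["name"])  # nas-c: most free space, not rebuilding
      ```

      Once nas-a finishes its rebuild, flipping its flag back puts it in rotation again; no users need to know it was ever sidelined.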

      Assuming you have, let's say, 3 or 4 mid-size NASes of 20-30 TB each, you'll have some redundancy in hardware.  Assuming the spindles are attached via SAS, if one computer goes down you can reattach its drives to a different computer's SAS ports.  If the disk chassis fails, you might have a spare by that point and can migrate the drives pretty quickly.

      Finally, performance: the fundamental drawback of these NASes is going to be access speeds.  Sure, you can get some pretty nifty MB/sec numbers out of them, but when they start getting hammered from all sides, everyone will suffer.  I'd seriously consider something like an Avere caching appliance.  It's mondo expensive, but it has everyone write to flash memory instead of spindle disks, and then behind your back goes and actually writes the data to spindle drives.  It'll keep the latest data on the appliance, so retrieving it is now super fast.  Yeah, if you load up something you haven't touched in 2-3 weeks, it'll have to come off disk, but since that disk is no longer getting thrashed by the renderfarm, load speed will be pretty acceptable.  And after that first viewing/rerendering, it's back in the cache appliance.  That should take care of most performance issues.
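      The behaviour described above is essentially a least-recently-used cache in front of slow storage.  A toy sketch of the idea (file names and the tiny capacity are invented; a real appliance does this transparently at the filesystem level):

      ```python
      from collections import OrderedDict

      # Toy sketch of the caching-appliance idea: recently touched files
      # are served from a small fast tier; anything evicted falls back
      # to a (simulated) read off the spindle disks.

      class HotCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.items = OrderedDict()  # path -> data, newest last

          def read(self, path, load_from_disk):
              if path in self.items:
                  self.items.move_to_end(path)      # cache hit: fast path
                  return self.items[path], "cache"
              data = load_from_disk(path)           # cold read off spindles
              self.items[path] = data
              if len(self.items) > self.capacity:
                  self.items.popitem(last=False)    # evict least-recent
              return data, "disk"

      cache = HotCache(capacity=2)
      disk = lambda p: f"<contents of {p}>"
      print(cache.read("shotA.exr", disk)[1])  # disk (cold first touch)
      print(cache.read("shotA.exr", disk)[1])  # cache (now hot)
      ```

      The first read of a cold file pays the disk penalty once; every access after that, including the "first viewing/rerendering" case above, comes off the fast tier until the file ages out.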

      -- Antoine

      ---In fulldome@yahoogroups.com, <fulldome@yahoogroups.com> wrote:

      Hello fellow Fulldome producers

      We are planning an upgrade to a 10GbE based network and a new storage solution and were wondering what route other Fulldome studios have gone down to ease the challenge of 4k+ Fulldome production.

      We currently have a glorified NAS which is a windows server box with 24 x 2TB disks in a RAID 6 array. We have another one of these just running Windows 7 as an archive and backup server. We use a very organised manual file structure, incremental WIP stream file saving and x-referencing a lot in 3D files. We have no asset management check in/out system and just use man management and good communication to ensure nobody accidentally overwrites any files. We work with 1k proxies a lot and test with 4k stills and then go to full 4k renders towards the end of the project.

      The problem with this storage configuration is that if you have to rebuild the RAID for whatever reason, it takes a very long time and affects performance during the rebuild.  Everything is shared with just a simple Windows share, so there are performance limitations.  Also, if the server itself has a hardware failure (not the RAID), all your files will be offline until you can bring the server back online.  A project backup on another storage device won't be that useful, as you won't be able to easily recreate all the network shares, which are usually explicit UNC paths.
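      One way to soften the explicit-UNC-path problem is a single level of indirection: scene files reference logical roots, and one small mapping turns those into concrete UNC paths.  A sketch (server and share names here are invented; the point is that a failover means editing one table, not every scene file):

      ```python
      # Hypothetical sketch: resolve project-relative paths through one
      # mapping, so moving a share to another server is a one-line change.

      SHARE_MAP = {
          "projects": r"\\fileserver01\projects",
          "renders":  r"\\fileserver01\renders",
      }

      def resolve(logical_path):
          """Turn 'projects/showX/scene.max' into a concrete UNC path."""
          root, _, rest = logical_path.partition("/")
          return SHARE_MAP[root] + "\\" + rest.replace("/", "\\")

      print(resolve("projects/showX/scene.max"))

      # After a failover, only the mapping changes:
      SHARE_MAP["projects"] = r"\\backupserver\projects"
      print(resolve("projects/showX/scene.max"))
      ```

      Windows offers the same idea natively via DFS namespaces or consistent drive-letter mappings, which keeps the indirection out of your pipeline scripts entirely.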

      Sequential storage speed for working with 4k+ files is very important, but a more serious issue for us is maintaining good levels of performance in a multi-user environment whilst 15+ users are all accessing the same storage space.

      We could continue down the path of a mega NAS and just spec something faster with more bandwidth in and out, but I'm curious whether others have explored SAN environments and whether they make sense in a multi-user Fulldome CG workflow, or anything else, either conceptual or turn-key, that people have tried or looked at.  We are open to adapting our workflow to suit any new technology solution.

      Our current networking is gigabit, with a dual gigabit teamed link from the file server to the switch and a quad teamed link between the switch near the file server and the switch that connects to our workstations.

      Regarding 10GbE networking, this is new to me, but the prices now seem almost acceptable for the performance gains.  The big issue is actually having enough bandwidth coming from the storage to feed multiple 10GbE connections to multiple workstations, and how best to manage QoS for multiple users without killing performance for all but one or two.  Direct-attaching to a SAN would give good line speed and reduce the need for some switching if the storage and workstations aren't located too far apart.
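      The storage-side bandwidth question above can be sanity-checked with the same kind of rough arithmetic (all figures here are assumed, not vendor numbers):

      ```python
      # Assumed: a 10GbE link delivers ~1.15 GB/s of real payload after
      # protocol overhead, and the storage backend sustains ~2 GB/s of
      # sequential reads.  Both numbers are illustrative guesses.
      link_payload_gbs = 1.15
      backend_gbs = 2.0

      full_rate_clients = backend_gbs / link_payload_gbs
      print(f"clients at full line rate: {full_rate_clients:.1f}")  # ~1.7

      # With a per-client QoS cap, more clients share the backend fairly:
      cap_gbs = 0.4   # e.g. throttle each client to ~400 MB/s
      print(f"clients at capped rate: {backend_gbs / cap_gbs:.0f}")  # 5
      ```

      The takeaway is that a couple of uncapped 10GbE workstations can saturate a modest backend on their own, so some form of per-client limiting (switch QoS, SMB bandwidth limits, or just proxy workflows) is what keeps one user from starving the rest.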

      As always, our budget for this is modest, so every penny has to go towards increased performance, redundancy and availability, but inevitably compromises will have to be made along the way.

      Happy to share what route we end up taking if others do the same!

      Paul Mowbray
      Head of NSC Creative
      National Space Centre, Exploration Drive, Leicester, LE4 5NS, UK
      Tel:  +44 (0) 116 2582117
