
Fulldome Production Storage and Network Solutions for 4k and beyond

  • Paul Mowbray
    Oct 9, 2013
      Hello fellow Fulldome producers

      We are planning an upgrade to a 10GbE-based network and a new storage solution, and were wondering what route other Fulldome studios have gone down to ease the challenges of 4k+ Fulldome production.

      We currently have a glorified NAS: a Windows Server box with 24 x 2TB disks in a RAID 6 array. We have another one of these, just running Windows 7, as an archive and backup server. We use a very organised manual file structure, incremental WIP stream file saving and a lot of x-referencing in 3D files (a rough sketch of the versioned-save idea follows below). We have no asset management check-in/check-out system and rely on man management and good communication to ensure nobody accidentally overwrites any files. We work with 1k proxies a lot, test with 4k stills, and then go to full 4k renders towards the end of the project.
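      For anyone curious what I mean by incremental WIP saving, here is a minimal Python sketch of the idea. The shot name and the _v001 naming convention are hypothetical examples, not our actual structure; it just illustrates finding the next version number rather than overwriting files in place.

      # Minimal sketch of incremental WIP saving, assuming a hypothetical
      # naming convention such as shot010_v001.max.
      import os
      import re

      def next_wip_path(folder, base, ext):
          """Return the path for the next incremental save, e.g. shot010_v004.max."""
          pattern = re.compile(re.escape(base) + r"_v(\d{3})" + re.escape(ext) + r"$")
          versions = []
          for name in os.listdir(folder):
              m = pattern.match(name)
              if m:
                  versions.append(int(m.group(1)))
          next_version = max(versions, default=0) + 1
          return os.path.join(folder, f"{base}_v{next_version:03d}{ext}")

      # Example: next_wip_path(r"\\server\projects\show\shots\010", "shot010", ".max")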

      The problem with this storage configuration is that if you have to rebuild the RAID for whatever reason it takes a very long time, and performance suffers during the rebuild. Everything is shared with simple Windows shares, so there are performance limitations. Also, if the server itself has a hardware failure (not the RAID), all your files will be offline until you can bring the server back online. A project backup on another storage device will not be that useful, as you won't be able to easily recreate all the network shares, which are usually referenced by explicit UNC paths (see the sketch below).
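      One mitigation we've considered is never baking explicit UNC paths into scenes and scripts, and instead resolving everything from a single configurable root, so that pointing at a backup box during an outage means changing one setting. A minimal Python sketch of that idea, where the PROJECT_ROOT variable and the server names are hypothetical:

      # Resolve all project paths from one configurable root rather than
      # hard-coded UNC paths. PROJECT_ROOT and the paths below are illustrative.
      import os

      # Normally points at the live file server; repoint at the backup server if it dies.
      PROJECT_ROOT = os.environ.get("PROJECT_ROOT", r"\\fileserver\projects")

      def project_path(*parts):
          """Build a path relative to whichever storage is currently live."""
          return os.path.join(PROJECT_ROOT, *parts)

      # Example: project_path("show", "assets", "textures", "earth_8k.exr")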

      Sequential storage speed for working with 4k+ files is very important, but a more serious issue for us is maintaining good levels of performance in a multi-user environment whilst 15+ users are all accessing the same storage space (some rough throughput arithmetic follows below).
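      To put a rough number on the multi-user problem, here is a back-of-envelope calculation assuming uncompressed 16-bit half-float RGBA 4k fulldome frames and each user pulling one full frame per second. Real demand will be lower with compression and 1k proxies; the point is just how quickly aggregate throughput climbs.

      # Back-of-envelope aggregate throughput estimate (assumed frame format and
      # access pattern, not measured figures from our pipeline).
      width = height = 4096        # 4k fulldome frame
      channels = 4                 # RGBA
      bytes_per_channel = 2        # 16-bit half float
      users = 15

      frame_bytes = width * height * channels * bytes_per_channel
      frame_mib = frame_bytes / 2**20
      aggregate_mib_s = frame_mib * users   # one frame per user per second

      print(f"One frame: {frame_mib:.0f} MiB")                      # ~128 MiB
      print(f"{users} users at 1 frame/s: {aggregate_mib_s:.0f} MiB/s aggregate")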

      We could continue down the path of a mega NAS and just spec something faster with more bandwidth in and out, but I'm curious whether others have explored SAN environments and whether they make sense in a multi-user Fulldome CG workflow, or anything else, either conceptual approaches or turnkey solutions, that people have tried or looked at. We are open to adapting our workflow to suit any new technology solution.

      Our current networking is gigabit, with a dual-gigabit teamed link from the file server to the switch and a quad teamed link between the switch near the file server and the one that connects to our workstations.

      Regarding 10GbE networking, this is new to me, but the prices now seem almost acceptable for the performance gains. The big issue is actually having enough bandwidth coming from the storage to feed multiple 10GbE connections to multiple workstations, and how best to manage QoS for multiple users without killing performance for all but one or two (see the rough link-rate arithmetic below). Direct-attaching to a SAN would give good line speed and reduce the need for some switching if the storage and workstations aren't located too far apart.
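      For comparison, here is some rough link-rate arithmetic on our current teamed-gigabit setup versus 10GbE, and how many full-rate 10GbE clients a given storage backend could actually feed. The 2000 MB/s array figure and the 90% efficiency factor are assumed placeholders, not measured numbers.

      # Rough usable bandwidth per link and how many 10GbE clients an assumed
      # storage backend can keep at full rate.
      def link_mb_s(gbit_s, efficiency=0.9):
          """Approximate usable MB/s for a link, allowing for protocol overhead."""
          return gbit_s * 1000 / 8 * efficiency

      dual_gige = link_mb_s(2)     # current server uplink: ~225 MB/s
      quad_gige = link_mb_s(4)     # current inter-switch trunk: ~450 MB/s
      ten_gige  = link_mb_s(10)    # one 10GbE client: ~1125 MB/s

      array_seq_mb_s = 2000        # assumed sequential throughput of the storage backend
      full_rate_clients = array_seq_mb_s / ten_gige

      print(f"Dual GigE uplink ~{dual_gige:.0f} MB/s, quad trunk ~{quad_gige:.0f} MB/s")
      print(f"One 10GbE client can pull ~{ten_gige:.0f} MB/s")
      print(f"A {array_seq_mb_s} MB/s array saturates only ~{full_rate_clients:.1f} clients at once")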

      As always, our budget is modest, so every penny has to help towards increased performance, redundancy and availability, but inevitably compromises will have to be made along the way.

      Happy to share what route we end up taking if others do the same!

      Paul Mowbray
      Head of NSC Creative
      National Space Centre, Exploration Drive, Leicester, LE4 5NS, UK
      Tel:  +44 (0) 116 2582117
