
New Sources in PCGen

  Eddy Anthony
  Message 1 of 1, Oct 29 7:29 PM
      In the past few releases you may have noticed that we have been adding new
      sources fairly regularly. New sources are added to the Alpha folder in the
      data directory. While this directory structure is not apparent from within
      PCGen, it is important to note because the Alpha folder is not included in
      the production release. Sources in the Alpha directory are there because
      they are new and have not undergone a complete QA review, or because they
      are not up to production standards for some other reason. Once they have
      been thoroughly tested they are moved to a permanent folder in the data
      directory.
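
      For anyone who wants to picture where these files live, the layout looks
      roughly like this (only the Alpha folder comes from the description above;
      the other names are illustrative, not the actual folder names):

          pcgen/
              data/
                  alpha/            <- new sources still in QA; excluded from production releases
                  <permanent dir>/  <- fully reviewed sources shipped with production releases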

      We need your help with this process.

      If you own any of these books and would like to see them in the production
      release, we need your feedback. Please report any bugs, missing features,
      or incorrect information. Additionally, if you are using the source and
      have no problems with it, we would like to know that as well. Feel free to
      compliment the Data Monkeys on a job well done; knowing that a source they
      have written is being used and appreciated is about the only reward they
      get.

      I've often felt that this community is the strongest QA tool we have. We
      have been getting a lot of great QA on the RSRD and other core datasets
      lately, and I want to make sure that everything reported gets trackered.
      You guys have been doing a great job at that and I hope you will continue
      to do so. We are shorthanded on almost all of the teams, but right now we
      could really use more QA Monkeys (especially on these Alpha sources) and
      Tracker Monkeys. Building a robust team of Data-QA Monkeys is essential if
      we are to improve the speed at which new sources are entered.

      Here is a list of the datasets that are in Alpha as of version 5.7.8:

      Dawnforge Campaign Setting by Fantasy Flight Games
      Dawnforge - Age of Legend by Fantasy Flight Games
      Encyclopaedia Arcane - Enchantment by Mongoose Publishing
      Encyclopaedia Divine - Fey Magic by Mongoose Publishing
      Faeries by Bastion Press
      Path of Magic by Fantasy Flight Games
      Path of Shadow by Fantasy Flight Games
      Seafarer's Handbook by Fantasy Flight Games
      Shadowforce Archer by Alderac Entertainment Group
      Sidewinder: Recoiled by Dog House Rules
      Slayer's Guide to Minotaurs by Mongoose Publishing
      Sorcery and Steam by Fantasy Flight Games
      Fort Griffin Echo Vol. 1 No. 1 by Dog House Rules
      Vlad the Impaler: Blood Prince of Wallachia by Avalanche Press
      XCrawl: Adventures in the Xtreme Dungeon Crawl League
      Sellout! Sourcebook for XCrawl

      This list will be maintained on our Wiki site, which will also list other
      related information such as the lead developer of the source, the
      publisher, etc. You can find that list here:

      New Source Development Process

      We have documented the process by which datasets are developed and
      included in the production releases. This is now included in the docs and
      can be found online here:

      The major addition to the process is that once a dataset reaches the point
      in development where it can be entered into the CVS repository, it gets
      trackered in an internal tracker system set up for this purpose. The Data
      team must verify that the set can be run and has no major errors which
      prevent the source from being loaded. The OGL team must verify that all of
      the OGL and PI material is correctly identified and that the set does not
      violate the OGL. The PL team must provide permissions from the publisher.
      All of these aspects must be approved before the source can be entered
      into CVS. By handling it with a tracker, the approvals are recorded in a
      single permanent location, and those responsible for each step can be
      assigned the tracker when their attention is required. It is my hope that
      this will speed up this part of the process so that a source with no
      issues can be approved and entered into CVS within a week of being
      submitted. We've been doing this for about two months and it's worked
      fairly well so far.

      We are open to any comments and suggestions regarding this process. It is
      our goal to streamline it as much as possible. If anyone has any questions
      or comments about data development or anything content related, feel free
      to drop me a line.
      ~ Eddy Anthony (MoSaT)
      ~ PCGen Content Silverback