Re: Data Loading, Memory, and Data Usage

  • FerretDave
    Message 1 of 35, Jan 30, 2013
      Greetings,
      Agreed - time can be spent better on improving the desktop side first :-) (says a man without a smart phone/tablet)

      Binary representation - we can check the 'version' of that and abort if it's different to prevent compatibility issues.
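A minimal sketch of that version check, assuming the cache keeps a small metadata (properties) file next to it; the file name, property key, class and method names are invented for illustration and are not existing PCGen code:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class CacheVersionGuard
{
    /**
     * Returns true only if the cache marker exists and records the same
     * version string as the running build; any mismatch means "rebuild".
     */
    public static boolean cacheIsUsable(Path markerFile, String runningVersion) throws IOException
    {
        if (!Files.exists(markerFile))
        {
            return false; // no cache yet; the caller falls back to a full .lst load
        }
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(markerFile))
        {
            props.load(in);
        }
        // Abort (and rebuild) on any difference, as suggested above.
        return runningVersion.equals(props.getProperty("cache.version"));
    }
}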

      All of this is a nice to have :-)

      Thanks Tom, I'll retreat under my rock and get back to real work now...

      Cheers
      Dave


      --- In pcgen_developers@yahoogroups.com, Tom Parker wrote:
      >
      >
      >
      > Forgot to respond to this:
      >
      > "I'm thinking along the lines of saving/copying the .pcg and cache file*
      > onto dropbox after I've created them on pc, and let the mobile device
      > sync with that and use them."
      >
      > As best I can tell, this assumes one of two things:
      > (1) The binary representation used for a database is the same on the desktop and the mobile version (not safe and not something I want to debug)
      >
      > or
      >
      >
      > (2) Someone has written some form of binary I/O system, which is what I was trying to avoid in keeping the cache "local". 
      >
      >
      > I realize from an ease of use perspective, what you are proposing sounds nice.  However, I believe this is neither the best place to focus our time, nor an infrastructure we want to (or even can) maintain/update/debug over time. 
      >
      >
      > If we had a huge influx of new developers, my opinion might change, but frankly there are some pretty major pieces of code we have that are tenuously maintained already and that slow down my ability to change the core (NPC Generator comes to mind here).  I'd rather not add another...
      >
      >
      > TP.
      >
      > --
      > Tom Parker
      >
      >
      >
      > ________________________________
      > From: FerretDave
      > To: pcgen_developers@yahoogroups.com
      > Sent: Tuesday, January 29, 2013 4:58 AM
      > Subject: [pcgen_developers] Re: Data Loading, Memory, and Data Usage
      >
      > Greetings,
      > ok, each variant of a mod'd item is stored, cool, so you're keeping cached copies of things per 'group of sources' loaded.
      > Whether that is, as you say, a query with a where clause on one table, a query on a table named after the source list, or a query on a table in a database named after the source list, those are all final design decisions.
      > I think my original line of thought was around only caching the sources that are actively used, so we're on the same lines.
      >
      > >Note ...we have to have an identifier...a matter of where that is stored...(either the dataset ID or the connection object), so none have an advantage there
      >
      > I'd actually say that they have different advantages...whether one is *better* than the others can vary:
      >
      > The second/third options (named tables/database) will perform slightly faster than the first, as they can simply read every row, whereas the first, with an identifier column, will not only perform (fractionally) slower due to the where clause, but would also take more disk space due to the extra column. 
      > The overhead in disk space for an extra table/database is minimal, but there will be a point - based on the number of different source combinations used - at which the first option becomes more efficient space-wise than the others, though most users only use a couple of combinations of sources?
      >
      > There's not a lot in it really, though it does depend a lot on the final architecture of such a cache/database. I guess my view is that while separate tables/databases are a bit more effort, the end result is that performance is slightly better, and that's the final goal.
      > This is assuming an SQL based approach with tables, as you say, other methods are possible.
      >
      > 3) - you say "lock up"? Did you mean "look up"?
      > Yes, the first time the app runs (on desktop or 'droid), the cache/database is built, and thereafter the lookup/load of the .lst files is not needed any more (until extra sources are loaded or a newer version of anything is installed).
      > I too am wary of copying binary data around (I work with 64-bit unix and 32-bit linux, handling data from 16-bit telco switches, and various 'endianness' of data), but as long as it's known hardware, it can be worked around.
      >
      > However I was thinking that, with the droid viewer, I've got to copy my .pcg file over, and if I *could* copy the cache file/database over as well (assuming compatible software versions) then the droid viewer doesn't even need to do that initial lookup. It becomes faster to use and, potentially, can have a smaller footprint (don't copy the .lst files over at all, everything it needs is in the cache).
      >
      > >The problem with getting binary data out to the Android device is that we don't know what permutations are necessary
      > We *do* know the permutations of the source sets, the .lst files to be used, as we've defined that when we created the .pcg that we're trying to view.
      >
      > The point here is that it's a *viewer*, it needs a .pcg file to work with. So on the PC we've done all the real work, we've created a .pcg file, and we've got a specific cache/database that works directly with that .pcg. No need to copy numerous variations of data sources around, just the one cache/database that we've used on the PC in the first place.
      > I'm thinking along the lines of saving/copying the .pcg and cache file* onto dropbox after I've created them on pc, and let the mobile device sync with that and use them.
      > *Maybe have an 'export character to mobile viewer' option, that exports the .pcg and the necessary cache data out to a compressed file that can be manually copied over.
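As an illustration of what that export option could amount to, here is a minimal sketch that bundles the .pcg and a single cache file into one zip; the class and method names are invented, and it assumes the cache really is one file on disk:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class MobileExport
{
    public static void export(Path pcgFile, Path cacheFile, Path targetZip) throws IOException
    {
        try (OutputStream out = Files.newOutputStream(targetZip);
             ZipOutputStream zip = new ZipOutputStream(out))
        {
            for (Path source : new Path[] {pcgFile, cacheFile})
            {
                // One zip entry per file, named after the original file name.
                zip.putNextEntry(new ZipEntry(source.getFileName().toString()));
                Files.copy(source, zip);
                zip.closeEntry();
            }
        }
    }
}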
      >
      > The main issue that Chris appears to be having is the initial memory usage at startup, due to loading the lst files, so I'm thinking that if we can alleviate that by bypassing even that initial load (using a copy of a cache pre-built for that specific character file on the desktop), the viewer becomes more feasible, and we get a double saving from the caching process.
      >
      > Cheers
      > Dave
      >
      > --- In pcgen_developers@yahoogroups.com, Tom Parker  wrote:
      > >
      > > On #2:
      > >
      > > Think of it this way:
      > >
      > >
      > > I load the RSRD with custom changes 1.  A cache is built.  It stores (among many other things) "Toughness" with my custom .MODs.  The database table Feats looks something like:
      > > Key        ... Dataset ... Display Name ...
      > > Toughness  ...  04x3555  ... Toughness  ... 
      > >
      > > I load the RSRD with custom changes 2.  A cache is built.  It stores (among many other things) "Toughness" with my custom MODs, which are incompatible with custom changes 1.  The database table Feats looks something like:
      > > Key        ... Dataset ... Display Name ...
      > > Toughness  ...  04x3555  ... Toughness  ... 
      > > Toughness  ...  7t592nx  ... Toughness  ... 
      > >
      > > In another table we now have:
      > > File ... Dataset ... Last Modification
      > > .LST ... 04x3555 ... Sun Jan 27, 2013 03:04:01 EDT
      > > .LST ... 7t592nx ... Sat Jan 1, 2011 13:43:51 EDT
      > > ...lots of RSRD entries tagged to 04x3555
      > > ...lots of RSRD entries tagged to 7t592nx
      > >
      > > There are lots of other tables (I'll leave out the implementation detail on that for now)  [*Note also the table would probably not be Feats, it would probably be Abilities, and we would have a column for Category, but this is trying to be explanatory, not architectural]
      > >
      > > Note a few things here:
      > > (1) Both of the loads ended up in one database.  In order to extract the "active" entries for the current loaded sources, we do NOT do:
      > > Select Key from Feats
      > > We do something like:
      > > Select Key from Feats where Dataset="04x3555"
      > >
      > > The 'where Dataset="04x3555"' would simply be tacked on to just about every query we do.
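For illustration, a minimal JDBC sketch of option (1); the Feats table and the Key/Dataset columns come from the example above, while the class and method names are invented:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class FeatKeyQuery
{
    // Column names follow the example above; a real schema might need other identifiers.
    public static List<String> loadKeys(Connection conn, String datasetId) throws SQLException
    {
        List<String> keys = new ArrayList<>();
        try (PreparedStatement ps =
                conn.prepareStatement("SELECT Key FROM Feats WHERE Dataset = ?"))
        {
            ps.setString(1, datasetId); // e.g. "04x3555"
            try (ResultSet rs = ps.executeQuery())
            {
                while (rs.next())
                {
                    keys.add(rs.getString(1));
                }
            }
        }
        return keys;
    }
}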
      > >
      > > One alternative is to do something more akin to:
      > >
      > > I load the RSRD with custom changes 1.  A cache is built.  It stores
      > > (among many other things) "Toughness" with my custom .MODs.  The
      > > database table Feats_04c3555 looks something like:
      > > Key        ... Display Name ...
      > > Toughness  ...  Toughness  ... 
      > >
      > > I load the RSRD with custom changes 2.  A cache is built.  It stores
      > > (among many other things) "Toughness" with my custom MODs, which are
      > > incompatible with custom changes 1.  The database Feats_7t592nx looks something like:
      > > Key        ... Display Name ...
      > > Toughness  ...  Toughness  ... 
      > >
      > > Note now that we can do:
      > > Select Key from Feats_04c3555
      > >
      > > There is no "where" restriction on the query... However, we still have the dataset identified - now in the table name.
      > >
      > > Another alternative would be to literally have multiple locations on disk, so the query would be:
      > > Select Key from Feats
      > >
      > > But the Connection that was created to the SQL database would have been something like:
      > > /Dataset_04x3555
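A sketch of how that third option might look with the embedded Apache Derby driver mentioned elsewhere in the thread, assuming one on-disk database directory per dataset; the directory layout is purely illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DatasetConnections
{
    // Requires derby.jar on the classpath; ";create=true" makes the database on first use.
    public static Connection open(String datasetId) throws SQLException
    {
        String url = "jdbc:derby:cache/Dataset_" + datasetId + ";create=true";
        return DriverManager.getConnection(url);
    }
}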
      > >
      > > Note in all cases we have to have an identifier of what combination of sources are loaded.  It's just a matter of where that is stored and how it is handled.  In all of the cases there is an item that has to be "carried" (either the dataset ID or the connection object), so none have an advantage there.   In all cases, the query to extract objects is simple and there is no "resolution" of .MODs when the load is requested - they are always resolved as they were written into the database. 
      > >
      > > As far as differences: The third option has a significant disk space penalty, and the second one has a similar, albeit less severe, version of the same thing.  The first one has to have the indexes on the tables built on both the key column as well as the Dataset column.
      > >
      > > Of course, that assumes we build the tables and SQL.  There are probably other options.
      > >
      > > On #3:
      > >
      > > I think I agree, but there are some nuances here I'm not sure about based on your text.
      > >
      > > My point was more to have the mobile application "lock up" once, the first time a data combination is loaded, and then never "lock up" again.  The problem with getting binary data out to the Android device is that we don't know what permutations are necessary.  If we tried to do ALL possible combinations, we'd die of disk space use.  So it needs to be dynamic based on what is on the device, and I think it's far better to distribute the LST and have the compile done on the Android device rather than trying to have a remote PCGen do the compile and have the data downloaded.  Having a remote PCGen has two issues: 1) Not all of us live next to a 4G tower! 2) Binary compatibility.  Even if the libraries are "shared", the specific build could be different and thus incompatible.  So I'd strongly push to have the cache never shared, ever.  If a mobile version wants a cache, it can build it, using the CPU on the mobile device.  One would hope the logic is
      > >  shared with the desktop PCGen so we don't have to write two caching systems, but sharing gets ugly, under almost any circumstance.
      > >
      > > TP.
      > > --
      > >
      > > Tom Parker
      > >
      > >
      > >
      > > ________________________________
      > >  From: FerretDave
      > > To: pcgen_developers@yahoogroups.com
      > > Sent: Sunday, January 27, 2013 6:56 PM
      > > Subject: [pcgen_developers] Re: Data Loading, Memory, and Data Usage
      > > 
      > > Greetings,
      > > Cool, it should just happen in the background and nobody's aware of it (other than faster performance). Good to know the .LST remains as is - thanks.
      > >
      > > 1) Note however that when I refer to a 'cache file', that's just terminology for now, it could be a database, it could be a binary representation of the in-memory database, it could be a set of indexes back to the original .lst files, just terminology... The point is that there is 'something' that lets me do a 'source load' faster than loading up all the .lst files as at present.
      > > I don't know the source code, and haven't a clue what you're talking about with 'graphs' in java (I've done moderately little java programming so far, I'm a C/Unix/Oracle/MySQL guy) so I'm just throwing ideas out here to help you consider alternative options :-)
      > >
      > > 2) 'two' comes in somewhere around here, but I realise my response rambles a bit and covers a few topics as one...
      > >
      > > Different databases - my reasoning is that, out of the many sources PCGen has, I only use about 5, and then a few dozen of my own homebrew. And for the 4 or so campaigns I run, 2 of them use a few more of my homebrew, and only 1 has psionics.
      > > Of my homebrew, two specifically make changes that conflict with each other (It's to do with fudging support for sources that expect psionics to be defined, basically I can load one or the other).
      > >
      > > I'll assume that only sources I actually load myself are updated into the database/cache/index (I'll use any of these words interchangeably from now on) as it would be inefficient to load in *all* sources that PCGen uses.
      > > Then, if there's only one database, my two conflicting sources are both loaded into it, and at startup, based on which sources I'm loading, that database has to process one or the other, and there's still some processing to do at loading time.
      > > If there was one database for one set of sources, and a separate database for the second set of sources, then each entire database is valid, and can be designed more efficiently.
      > >
      > > Basically, if I .mod something differently in two different sources, the single database method has to note the original definition, and then both of the .mod variants, and apply the .mod appropriate to the source I'm loading. Whereas the multiple database method is storing the final .MODed variant appropriate to that source selection.
      > >
      > > Yes, most of the data in the two databases ends up being the same, but the load time is quicker for the second situation as there's absolutely no data processing (MOD etc) to be done.
      > >
      > > 2.5) I guess the difference in opinion could be around what you are storing in the database?
      > >
      > > I see the current process as: an 'item' is loaded from a file into memory, and a .mod on that item is loaded from another file, and the item in memory is then changed to reflect its modified status.
      > > In memory, there is now no indication of the original (unmodified) item without restarting PCGen and loading only the original .lst file.
      > >
      > > I'm envisaging that the modified item entry is what we'd store in the database. It may be that what you're describing is to store both the item and the .mod for that item?
      > >
      > > 3) Your later comments on binary compatibility and different versions I fully agree with: any and all cache/database data is to be wiped out with an upgrade, it *is* for 'internal' use only.
      > >
      > > However, what got this thread started (or got me onto it anyway) was the discussion on the android viewer; *if* the cache/database could be referenced by that viewer (using pcgen code, doing it with the 'internals'), then that viewer becomes more efficient as well. I'm envisaging that viewer eventually being part of the main code and thus using the same libraries to access the data in a 'legal' way.
      > >
      > > 4) An option to limit disk space usage would also be appropriate.
      > >
      > > As I said, I'm just trying to bounce ideas to help... :-)
      > >
      > > Cheers
      > > Dave
      > >
      > >
      > >
      > >
      > > --- In pcgen_developers@yahoogroups.com, Tom Parker  wrote:
      > > >
      > > >
      > > >
      > > > For clarity: This project is intended for the code.  Not for users to use and not for the data team to ever be aware it exists (other than their repeated loads just got faster).  The proposal is not to change the LST format *at all*.  You still have LST files and those can still be shared.  The database is *solely* acting as a cache for the code, not as a storage mechanism for the data team.
      > > >
      > > > A few comments:
      > > >
      > > >
      > > > 1) "A different set of sources get saved to a different cache file."
      > > >
      > > >
      > > > You suggest storing in different databases vs. one.  Multiple databases are less efficient in terms of disk space and can get tricky at times to keep straight (because either the disk location or the table names have to be dynamically built).  IMHO, it's easier to have one set of shared tables with two columns (campaign ID and item key) as the primary key than having separate databases.  Since we need the campaign file database to store the last file modification time anyway, having a table for the campaign ID is not a big deal.  With properly built indices the query time differences between the two models would be immaterial.
      > > >
      > > >
      > > > 2) So, in PCGen I specify my sources, and these get loaded up into memory
      > > > (all as at present), and then that memory database gets written out to a
      > > >  cache file named or identified by the set of sources.
      > > >
      > > > I see no reason for two steps here.  The load could (and for purposes of simplicity, probably should) go directly into the database.  Also, having a separately named cache file then gives us a binary file format to maintain.  I am vehemently against this as it creates a huge burden for questionable value.  It also raises the issue of re-loading the cache file off of the disk when you need to load sources.  This is unnecessary if the data was simply left live in the back-end database.  "Load" would simply be a series of queries to build the master object index.
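A rough sketch of that 'series of queries' idea, assuming the shared-table layout with a Dataset column; the table list, column names, and the flat map used as the 'master object index' are placeholders, not real PCGen structures:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class CacheLoader
{
    public static Map<String, Map<String, String>> load(Connection conn, String datasetId)
            throws SQLException
    {
        Map<String, Map<String, String>> index = new HashMap<>();
        // Fixed list of per-type tables, so no SQL is built from untrusted input.
        for (String table : new String[] {"Abilities", "Skills", "Spells", "Equipment"})
        {
            Map<String, String> byKey = new HashMap<>();
            String sql = "SELECT ItemKey, DisplayName FROM " + table + " WHERE Dataset = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql))
            {
                ps.setString(1, datasetId);
                try (ResultSet rs = ps.executeQuery())
                {
                    while (rs.next())
                    {
                        byKey.put(rs.getString(1), rs.getString(2));
                    }
                }
            }
            index.put(table, byKey);
        }
        return index;
    }
}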
      > > >
      > > > The minimal amount of new creation here is the entries into the database.  If folks want to inspect or share the data, use the PCC/LST files.
      > > >
      > > > 3) "To view the character, you then only need the .pcg and the related cache file as 'data'..."
      > > >
      > > > Correct, but to reiterate the point from #2 and to clarify my position, I would not be enabling any method to share the cache (because there are no cache files that PCGen comprehends - they are all in the internal file format of whatever our back-end database is).  They exist solely as entries in a database.  These would always be version specific, and would never be intended to be shared.  I believe the code should be actively working to prevent such usage (install of a new version of PCGen on top of an old one should detect the old tables and delete them in their entirety).  It is just WAY too complicated to maintain that binary compatibility, and when the data is solely a cache there is no value in maintaining compatibility.  The compatibility is at the human readable PCG and LST layer, and we should keep it that way.  As a side note, the user should not be attempting to share an Apache Derby database with another person.  Hopefully no one would really try that...
      > > >
      > > > 4) "Possibly want some option to [automatically?] delete 'old' cache files
      > > > if they haven't been used for a while (that random time I selected all
      > > > the wrong sources and then changed my mind)"
      > > >
      > > > Some sort of aging mechanism (or ability to delete a specific cached set) is a good idea.
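A possible shape for that aging mechanism, assuming a bookkeeping table that records when each cached dataset was last loaded; the DatasetUsage table and its columns are made up for the sketch, and a real version would also have to clean the per-type data tables:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class CacheAging
{
    /** Removes the usage records of any cached dataset not loaded within maxAgeDays. */
    public static int purgeUnused(Connection conn, int maxAgeDays) throws SQLException
    {
        Timestamp cutoff = Timestamp.from(Instant.now().minus(maxAgeDays, ChronoUnit.DAYS));
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM DatasetUsage WHERE LastLoaded < ?"))
        {
            ps.setTimestamp(1, cutoff);
            return ps.executeUpdate();
        }
    }
}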
      > > >  
      > > > TP.
      > > >
      > > > --
      > > > Tom Parker
      > > >
      > > >
      > > >
      > > > ________________________________
      > > >  From: FerretDave
      > > > To: pcgen_developers@yahoogroups.com
      > > > Sent: Friday, January 25, 2013 4:24 PM
      > > > Subject: [pcgen_developers] Re: Data Loading, Memory, and Data Usage
      > > > 
      > > > Greetings,
      > > >
      > > > While I like databases... I do like the ease of editing of .lst files for homebrew, along with the simplicity of copying a few files around to share my homebrew with my group. And using your compiling analogy, it's easier to share source code than binaries...
      > > >
      > > > So rather than a lot of work moving the .lst files into a database, I like the idea of caching the data set for a combination of sources (that the user is actually using).
      > > >
      > > > So, in PCGen I specify my sources, and these get loaded up into memory (all as at present), and then that memory database gets written out to a cache file named or identified by the set of sources.
      > > > A different set of sources get saved to a different cache file.
      > > >
      > > > Next time I start pcgen, it checks if any of the .lst/.pcc files used by any cache file are newer than the cache, and if so, deletes the specific cache file. (the exact process 'make' uses to figure out what files to compile)
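A minimal sketch of that make-style staleness check, assuming we have the list of .lst/.pcc files a cache was built from and the time it was built; class and parameter names are illustrative:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.util.List;

public class CacheFreshness
{
    public static boolean isStale(List<Path> sourceFiles, FileTime cacheBuiltAt) throws IOException
    {
        for (Path lst : sourceFiles)
        {
            if (Files.getLastModifiedTime(lst).compareTo(cacheBuiltAt) > 0)
            {
                // One changed .lst/.pcc is enough to invalidate the whole cached set.
                return true;
            }
        }
        return false;
    }
}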
      > > >
      > > > When I specify my sources and click load, if there's a matching cache file, it loads that instead of processing the .lst/.pcc files.
      > > >
      > > > Sounds like a very efficient process: the first time around the user gets the exact same load speed they have now*, and subsequent loads are significantly faster (at the expense of a bit of disk space for the cache files).
      > > >
      > > > Homebrew editors can hack away as required, never fearing a cache being out of date.
      > > >
      > > > The initial install takes the same amount of time/space as it doesn't include any cache files itself, so we never have to worry about what the 'common' set of sources people would use is; the system can dynamically build the cache for the sources *they* use.
      > > >
      > > > To view the character, you then only need the .pcg and the related cache file as 'data'...
      > > >
      > > > Possibly want some option to [automatically?] delete 'old' cache files if they haven't been used for a while (that random time I selected all the wrong sources and then changed my mind)
      > > >
      > > > *Plus a minor amount of time spent writing the cache out to disk, though that could be done as a background task on a separate thread...
      > > >
      > > > Cheers
      > > > Dave
      > > > --- In pcgen_developers@yahoogroups.com, Tom Parker  wrote:
      > > > >
      > > > >
      > > > >
      > > > > I'm going to echo and expand upon a conversation I had some time ago (a few years maybe??) around data loading with another developer (Joe if I recall).  I believe some of this is on one of our lists somewhere, or maybe a code meeting, but might as well update and review.  This is intended to give some context to possible alleviations of memory and CPU usage around rules data (PCC and LST data).
      > > > >
      > > > > I've already talked about potentially putting the rules data store information into a "real" database (Apache Derby type of thing is one option, something like hibernate could be another [although I don't think our objects fit into this model well, but I'll hold that as a separate topic]).
      > > > >
      > > > > The other advantage of using a database is that certain load times could be dramatically reduced. 
      > > > >
      > > > >
      > > > > Since we're on the developers list, I can simply say that loading a set of
      > > > > PCC and LST data (assuming we are loading in its entirety) is little different from compiling a program.  We take a set of raw data (PCC and LST files) and produce a compiled version
      > > > > (today in memory).  As long as we consider a few things (I'll address
      > > > > these below) there is no reason to think this compiled version couldn't be cached, just as an object file [.class files in a JAR in the case of PCGen] is what is shipped to end users (so they don't have to recompile every time they use a program)
      > > > >
      > > > >
      > > > > So let's assume for a moment that we are using an on-disk database for storing "compiled" data.  I'm going to talk in relational database thoughts, though I'm sure that can be adapted to other forms as necessary.  Let's assume we want to cache that data between runs of PCGen so that data load becomes very fast.  What are the things we need to consider to appropriately build and update such data?
      > > > >
      > > > >
      > > > > Consider one of the main reasons why we have to uniquely load data each time:
      > > > >
      > > > > File 1:
      > > > > MyFeat <> [blah]
      > > > >
      > > > > File 2:
      > > > > MyFeat.MOD <> [more blah]
      > > > >
      > > > > File 3:
      > > > > MyFeat.COPY=MyOtherFeat <> [blarg]
      > > > >
      > > > >
      > > > > Assuming these files are in different campaigns (PCC files), then we end up with a whole potential set of interactions, very specifically considering what the loaded data represents (the answer of what MyFeat represents is different if File 2 is not loaded, etc).  So any "preloading" of data becomes rather challenging... it has to be done in context to *all* of the loaded data. 
      > > > >
      > > > >
      > > > > So one consideration is that each permutation of loaded data would have to be compiled separately.  (A compiled version of RSRD is not useful if you loaded RSRD+eclipse)
      > > > >
      > > > >
      > > > > This leads to the consideration that you'd want to have more than one cache built (in case you have characters in two different types of campaigns, you would not want to "recompile" every time you swap campaigns).  So from a database design perspective there is probably a "cached campaigns table" that stores what files are loaded with each permutation, and then the actual data tables have a "campaign id" column that shows which campaign they are relevant to...
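One way that bookkeeping could look as tables, sketched as Derby-style DDL issued from Java: a record per cached permutation, the files it was built from (with their modification times), and a dataset/campaign column plus item key on the data tables themselves. Every name here is illustrative, not an actual PCGen schema:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CacheSchema
{
    public static void create(Connection conn) throws SQLException
    {
        try (Statement stmt = conn.createStatement())
        {
            stmt.executeUpdate("CREATE TABLE Datasets ("
                + "DatasetId VARCHAR(32) PRIMARY KEY, "
                + "Description VARCHAR(255))");
            stmt.executeUpdate("CREATE TABLE DatasetFiles ("
                + "DatasetId VARCHAR(32), "
                + "FilePath VARCHAR(1024), "
                + "LastModified TIMESTAMP)");
            // Example data table: (DatasetId, ItemKey) as the primary key,
            // matching the "campaign ID and item key" suggestion in this thread.
            stmt.executeUpdate("CREATE TABLE Abilities ("
                + "DatasetId VARCHAR(32), "
                + "ItemKey VARCHAR(255), "
                + "DisplayName VARCHAR(255), "
                + "PRIMARY KEY (DatasetId, ItemKey))");
        }
    }
}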
      > > > >
      > > > > Once that is possible, there is no reason for a basic end user to have to wait for some common builds, so certain items could probably be done at install time for things like the basic
      > > > > RSRD, SRD, Pathfinder, and then done on first load for other
      > > > > custom permutations.  From a usability standpoint for end users, this would make the install process a bit slower, but might make the basic use cases much faster.  Either way, repeated loads of similar data would be significantly faster.
      > > > >
      > > > >
      > > > > For items which are CPU gated (mobile devices), this could be a huge benefit to concentrating CPU time in a known event and then not having to repeat that work.  This likely would alleviate the issue Chris highlighted with the data load effectively locking up the mobile device for some period of time.  Having the objects on-disk (on-flash really ;) ) would also alleviate at least some of the memory issues (objects would not be in memory, and if everything was written to a database, the strings would be persisted on disk and not reference the original string, so the original large strings that make up the LST files would also be cleaned up by the garbage collector)
      > > > >
      > > > >
      > > > > The next challenge is detecting a form of "cache dirty" situation where something has changed and the version in the database is old.  This is critical so that folks that develop data are not burned by changing a file and PCGen still thinking the in-database version is valid. 
      > > > >
      > > > >
      > > > > So with the desktop version (or the mobile version if a data update was received), this would in effect be detecting whether any of the loaded files has been changed since the compile has been done.  This is *exactly* like an incremental compile done by eclipse or other IDEs... the key point here being that it is much safer for us to detect a single change and force a full recompile, since there are some pretty unique subtleties if we try to contain the load to only a subset of objects or object types.
      > > > >    
      > > > >
      > > > > Why full compile with any change?  Because the alternative is hard, and fragile.
      > > > >
      > > > > Consider not just .MOD and .COPY as shown above which would be one of the subtleties.  Another would be if an ABILITY was changed to add a CHOOSE: entry, then there may be references to it in other file types that are no longer valid.  If those files are not reloaded, then those issues would not be detected.  If we start with the presumption that we assume a file being changed means everything in the file is recompiled, and then with .MODs we have to compile all of that entire type of object... and then consider that we need to deal with references to that object, we effectively end up in a situation where with a small change on an Ability in a single file, we need to compile "abilities and everything that can reference an ability"... so we are talking *everything* with any change. 
      > > > >
      > > > >
      > > > > The more narrow case of detecting specifically what in the file has changed and looking up what references that item gets very similar to the "load only items for a specific PC" type design which requires significantly more work (it becomes random access to the files) and is much more fragile.  To this day, I have seen both NetBeans and Eclipse mess up incremental compiles due to some very subtle situations, so I really don't think trying to do a true incremental compile is worth the effort/risk.
      > > > >
      > > > >
      > > > > So net is that I think relying on an underlying database enables some pretty interesting capabilities that would help both the desktop and any mobile implementation.  This is one of my "someday" projects (though pretty far down on the list which is why I don't talk about it much - managing expectations about when things will happen is always hard), so I'd definitely be interested in seeing this progress and be integrated into PCGen.  Also, by alleviating the memory issues while still having the full set of data available, it keeps a mobile implementation much closer to the desktop version, which (even if not initially implemented) keeps the option open to edit the PC in the mobile version, as well as dealing with some other subtleties without having to reinvent the wheel.
      > > > >
      > > > > Reinvent what?  Consider temporary bonuses.  To be useful in the mobile version, it means any object with a TEMPBONUS would need to be loaded on the mobile version *regardless of whether it is actually on the PC*... so there is still a pretty significant data scanning impact and the "random loader" discussed in the other thread is not sufficient [which is why the index I talked about there was step 1 of 3 or 4 - the TEMPBONUS objects would be one of those later steps, probably 3 or 4]... Otherwise the PC could only do "self" and not get effects from other party members.
      > > > >
      > > > >
      > > > > Obviously the first step would be discussions around what type of database we use and tradeoffs of those options.  I'll have opinions [always do ;) ], but do not have a full sense of the considerations, so am open to proposals and would appreciate help on the tradeoffs if anyone wants to make recommendations or do analysis.
      > > > >
      > > > > So Chris, you asked earlier if I had preferred directions on things. If I had to choose, this is probably one of the top things (if not the top) that I think could benefit both you and desktop PCGen... and therefore where I would point and direct your efforts if I had my druthers.
      > > > >
      > > > >
      > > > > TP.
      > > > >
      > > > > --
      > > > > Tom Parker
      > >
      >
      >
      >
      >
    • FerretDave
      Greetings, Agreed - time can be spent better on improving the desktop side first :-) (says a man without a smart phone/tablet) Binary representation - we can
      Message 35 of 35 , Jan 30, 2013
      • 0 Attachment
        Greetings,
        Agreed - time can be spent better on improving the desktop side first :-) (says a man without a smart phone/tablet)

        Binary representation - we can check the 'version' of that and abort if it's different to prevent compatibility issues.

        All of this is a nice to have :-)

        Thanks Tom, I'll retreat under my rock and get back to real work now...

        Cheers
        Dave


        --- In pcgen_developers@yahoogroups.com, Tom Parker wrote:
        >
        >
        >
        > Forgot to respond to this:
        >
        > "I'm thinking along the lines of saving/copying the .pcg and cache file*
        > onto dropbox after I've created them on pc, and let the mobile device
        > sync with that and use them."
        >
        > As best I can tell, this assumes one of two things:
        > (1) The binary representation used for a database is the same on the desktop and the mobile version (not safe and not something I want to debug)
        >
        > or
        >
        >
        > (2) Someone has written some form of binary I/O system, which is what I was trying to avoid in keeping the cache "local". 
        >
        >
        > I realize from an ease of use perspective, what you are proposing sounds nice.  However, I believe this is neither the best place to focus our time, nor an infrastructure we want to (or even can) maintain/update/debug over time. 
        >
        >
        > If we had a huge influx of new developers, my opinion might change, but frankly there are some pretty major pieces of code we have that are tenuously maintained already and that slow down my ability to change the core (NPC Generator comes to mind here).  I'd rather not add another...
        >
        >
        > TP.
        >
        > --
        > Tom Parker
        >
        >
        >
        > ________________________________
        > From: FerretDave
        > To: pcgen_developers@yahoogroups.com
        > Sent: Tuesday, January 29, 2013 4:58 AM
        > Subject: [pcgen_developers] Re: Data Loading, Memory, and Data Usage
        >
        > Greetings,
        > ok, each variant of a mod'd item is stored, cool, so you're keeping cached copies of things per 'group of sources' loaded.
        > Whether that is, as you say, a query with a where clause on one table, a query on a table named (relatively) to the source list, or is in a table in a database named (relatively) to the source list, those are all final design decisions.
        > I think my original line of thought was around only caching the sources that are actively used, so we're on the same lines.
        >
        > >Note ...we have to have an identifier...a matter of where that is stored...(either the dataset ID or the connection object), so none have an advantage there
        >
        > I'd actually say that they have different advantages...whether one is *better* than the others can vary:
        >
        > The second/third options (named tables/database) will perform slightly faster than the first, as they will process all data, whereas the first, with an identifier column, will not only perform (fractionally) slower due to the where clause, but due to the extra column would also take more disk space. 
        > The overhead in disk space for an extra table/database is minimal, but there will be a point - based on number of different source combinations used - at which the first option becomes more efficient spacewise than the others, though most users use only a couple of combinations of sources?
        >
        > There's not a lot in it really though, and it really does depend a lot on the final architecture of such cache/database. I guess my view is that while the separate tables/database is a bit more effort, the end result is that performance is slightly better, and that's the final goal.
        > This is assuming an SQL based approach with tables, as you say, other methods are possible.
        >
        > 3) - you say "lock up" ? did you mean "look up" ?
        > Yes, the first time the app runs (on desktop or 'droid), the cache/database is built, and thereafter the lookup/load of the .lst files is not needed any more (until extra sources are loaded or a newer version of anything is installed).
        > I too am wary of copying binary data around (I work with 64 bit unix and 32 bit linux, handling data from 16 bit telco switches, and various 'endianness' of data), but as long as it's a known hardware, it can be worked around.
        >
        > However I was thinking that, with the droid viewer, I've got to copy my .pcg file over, and if I *could* copy the cache file/database over too as well (assuming compatible software versions) then the droid viewer doesn't even need to do that initial lookup. It becomes faster to use and, potentially, can have a smaller footprint (don't copy the .lst files over at all, everything it needs is in the cache).
        >
        > >The problem with getting binary data out to the Android device is that we don't know what permutations are necessary
        > We *do* know the permutations of the source sets, the .lst files to be used, as we've defined that when we created the .pcg that we're trying to view.
        >
        > The point here is that it's a *viewer*, it needs a .pcg file to work with. So on the PC we've done all the real work, we've created a .pcg file, and we've got a specific cache/database that works directly with that .pcg. No need to copy numerous variations of data sources around, just the one cache/database that we've used on the PC in the first place.
        > I'm thinking along the lines of saving/copying the .pcg and cache file* onto dropbox after I've created them on pc, and let the mobile device sync with that and use them.
        > *Maybe have an 'export character to mobile viewer' option, that exports the .pcg and the necessary cache data out to a compressed file that can be manually copied over.
        >
        > The main issue that Chris appears to be having is the initial memory usage at startup, due to loading the lst files, so I'm thinking that if we can alleviate that, by bypassing even that initial load (using a copy of a cache pre-built for that specific character file on the desktop), it makes the viewer more feasible, and gives us a double saving from the caching process.
        >
        > Cheers
        > Dave
        >
        > --- In pcgen_developers@yahoogroups.com, Tom Parker  wrote:
        > >
        > > On #2:
        > >
        > > Think of it this way:
        > >
        > >
        > > I load the RSRD with custom changes 1.  A cache is built.  It stores (among many other things) "Toughness" with my custom .MODs.  The database table Feats looks something like:
        > > Key        ... Dataset ... Display Name ...
        > > Toughness  ...  04x3555  ... Toughness  ... 
        > >
        > > I load the RSRD with custom changes 2.  A cache is built.  It stores (among many other things) "Toughness" with my custom MODs, which are incompatible with custom changes 1.  The database table Feats looks something like:
        > > Key        ... Dataset ... Display Name ...
        > > Toughness  ...  04x3555  ... Toughness  ... 
        > > Toughness  ...  7t592nx  ... Toughness  ... 
        > >
        > > In another table we now have:
        > > File ... Dataset ... Last Modification
        > > .LST ... 04x3555 ... Sun Jan 27, 2013 03:04:01 EDT
        > > .LST ... 7t592nx ... Sat Jan 1, 2011 13:43:51 EDT
        > > ...lots of RSRD entries tagged to 04x3555
        > > ...lots of RSRD entries tagged to 7t592nx
        > >
        > > There are lots of other tables (I'll leave out the implementation detail on that for now)  [*Note also the table would probably not be Feats, it would probably be Abilities, and we would have a column for Category, but this is trying to be explanatory, not architectural]
        > >
        > > Note a few things here:
        > > (1) Both of the loads ended up in one database.  In order to extract the "active" entries for the current loaded sources, we do NOT do:
        > > Select Key from Feats
        > > We do something like:
        > > Select Key from Feats where Dataset="04x3555"
        > >
        > > The 'where Dataset="04x3555"' would simply be tacked on to just about every query we do.
        > >
        > > One alternative is to do something more akin to:
        > >
        > > I load the RSRD with custom changes 1.  A cache is built.  It stores
        > > (among many other things) "Toughness" with my custom .MODs.  The
        > > database table Feats_04c3555 looks something like:
        > > Key        ... Display Name ...
        > > Toughness  ...  Toughness  ... 
        > >
        > > I load the RSRD with custom changes 2.  A cache is built.  It stores
        > > (among many other things) "Toughness" with my custom MODs, which are
        > > incompatible with custom changes 1.  The database Feats_7t592nx looks something like:
        > > Key        ... Display Name ...
        > > Toughness  ...  Toughness  ... 
        > >
        > > Note now that we can do:
        > > Select Key from Feats_04c3555
        > >
        > > There is no "where" restriction on the query... However, we still have dataset identified - now in the table name.
        > >
        > > Another alternative would be to literally have multiple locations on disk, so the query would be:
        > > Select Key from Feats
        > >
        > > But the Connection that was created to the SQL database would have been something like:
        > > /Dataset_04x3555
        > >
        > > Note in all cases we have to have an identifier of what combination of sources are loaded.  It's just a matter of where that is stored and how it is handled.  In all of the cases there is an item that has to be "carried" (either the dataset ID or the connection object), so none have an advantage there.   In all cases, the query to extract objects is simple and there is no "resolution" of .MODs when the load is requested - they are always resolved as they were written into the database. 
        > >
        > > As far as differences: The third option has a significant disk space penalty, and the second one has a similar albeit less worse version of the same thing.  The first one has to have the indexes on the tables built on both the key column as well as the Dataset column.
        > >
        > > Of course, that assumes we build the tables and SQL.  There are probably other options.
        > >
        > > On #3:
        > >
        > > I think I agree, but there are some nuances here I'm not sure about based on your text.
        > >
        > > My point more was to have the mobile application "lock up" once the first time a data combination is loaded and then never "lock up" again.  The problem with getting binary data out to the Android device is that we don't know what permutations are necessary.  If we tried to do ALL possible combinations, we'd die of disk space use.  So it needs to be dynamic based on what is on the device, and I think it's far better to distribute the LST and have the compile done on the Android device rather than trying to have a remote PCGen do the compile and have the data downloaded.  Having a remote PCGen has two issues: 1) Not all of us live next to a 4G tower! 2) Binary compatibility.  Even if the libraries are "shared", the specific build could be different and thus incompatible.  So I'd strongly push to have the cache never shared, ever.  If a mobile version wants a cache, it can build it, using the CPU on the mobile device.  One would hope the logic is
        > >  shared with the desktop PCGen so we don't have to write two caching systems, but sharing gets ugly, under almost any circumstance.
        > >
        > > TP.
        > > --
        > >
        > > Tom Parker
        > >
        > >
        > >
        > > ________________________________
        > >  From: FerretDave
        > > To: pcgen_developers@yahoogroups.com
        > > Sent: Sunday, January 27, 2013 6:56 PM
        > > Subject: [pcgen_developers] Re: Data Loading, Memory, and Data Usage
        > > 
        > > Greetings,
        > > Cool, it should just happen in the background and nobodies aware of it (other than faster performance). Good to know the .LST remains as is - thanks.
        > >
        > > 1) Note however that when I refer to a 'cache file', that's just terminology for now, it could be a database, it could be a binary representation of the in-memory database, it could be a set of indexes back to the original .lST files, just terminology... The point is that there is 'something' that lets me do a 'source load' faster than loading up all the .lst files as at present.
        > > I don't know the source, and haven't a clue what you're talking about with 'graphs' in java (I've done moderately little java programming so far, I'm a C/Unix/Oracle/MySQL guy) so I'm just throwing ideas out here to help you consider alternative options :-)
        > >
        > > 2) 'two' comes in somewhere around here, but I realise my response rambles a bit and covers a few topics as one...
        > >
        > > Different databases - my reasoning is that, out of the many sources PCGen has, I only use about 5, and then a few dozen of my own homebrew. And for the 4 or so campaigns I run, 2 of them use a few more of my homebrew, and only 1 has psionics.
        > > Of my homebrew, two specifically make changes that conflict with each other (It's to do with fudging support for sources that expect psionics to be defined, basically I can load one or the other).
        > >
        > > I'll assume that only sources I actually load myself are updated into the database/cache/index (I'll use any of these words interchangeably from now) as it would be inefficient to load in *all* sources that PCGen uses.
        > > Then, if there's only one database, my two conflicting sources are both loaded into it, and at start up, based on which sources I'm loading, that database has to process one or the other, and there's still some processing to do at loading time.
        > > If there was one database for one set of sources, and a separate database for the second set of sources, then each entire database is valid, and can be designed more efficiently.
        > >
        > > Basically, if I .mod something differently in two different sources, the single database method has to note the original definition, and then both of the .mod variants, and apply the .mod appropriate to the source I'm loading. Whereas the multiple database method is storing the final .MODed variant appropriate to that source selection.
        > >
        > > Yes, most of the data in the two databases ends up being the same, but the load time is quicker for the second situation as there's absolutely no data processing (MOD etc) to be done.
        > >
        > > 2.5) I guess, the difference in opinion could be around what are you storing in the database ?
        > >
        > > I see the current process as: an 'item' is loaded from a file into memory, and a .mod on that item is loaded from another file, and the item in memory is then changed to reflect its modified status.
        > > 'in memory' there is no indication now of the original (unmodified) item without restarting PCGen and loading only the original .lst file.
        > >
        > > I'm envisaging that the modified item entry is what we'd store in the database. It may be that what you're describing is to store both the item, and the .mod for that item ?
        > >
        > > 3) Your later comments on binary compatibility and different versions, I fully agree with, any and all cache/database data is to be wiped out with an upgrade, it *is* for 'internal' use only.
        > >
        > > However, what got this thread started (or got me onto it anyway) was the discussion on the android viewer, *if* the cache/database could be referenced by that viewer (using pcgen code, doing it with the 'internals'), then that viewer becomes more efficient as well. I'm envisaging that viewer eventually being part of the main code and thus using the same libraries to the access the data in a 'legal' way.
        > >
        > > 4) an option to limit disk space usage too would be appropriate.
        > >
        > > As I said, I'm just trying to bounce ideas to help... :-)
        > >
        > > Cheers
        > > Dave
        > >
        > >
        > >
        > >
        > > --- In pcgen_developers@yahoogroups.com, Tom Parker  wrote:
        > > >
        > > >
        > > >
        > > > For clarity: This project is intended for the code.  Not for users to use and not for the data team to ever be aware it exists (other than their repeated loads just got faster).  The proposal is not to change the LST format *at all*.  You still have LST files and those can still be shared.  The database is *solely* acting as a cache for the code, not as a storage mechanism for the data team.
        > > >
        > > > A few comments:
        > > >
        > > >
        > > > 1) "A different set of sources get saved to a different cache file."
        > > >
        > > >
        > > > You suggest storing in different databases vs. one.  Multiple databases is less efficient in terms of disk space and can get tricky at times to keep straight (because either the disk location or the table names have to be dynamically built).  IMHO, it's easier to have one set of shared tables with two columns (campaign ID and item key) as the primary key than having separate databases.  Since we need the campaign file database to store the last file modification time anyway, having a table for the campaign ID is not a big deal.  With properly built indicies the query time differences between the two models would be immaterial.
        > > >
        > > >
        > > > 2) So, in PCGen I specify my sources, and these get loaded up into memory
        > > > (all as at present), and then that memory database gets written out to a
        > > >  cache file named or identified by the set of sources.
        > > >
        > > > I see no reason for two steps here.  The load could (and for purposes of simplicity, probably should) go directly into the database.  Also, having a separately named cache file then gives us a binary file format to maintain.  I am vehemently against this as it creates a huge burden for questionable value.  It also raises the issue of re-loading the cache file off of the disk when you need to load sources.  This is unnecessary if the data was simply left live in the back-end database.  "Load" would be simply be a series of queries to build the master object index.
        > > >
        > > > The minimal amount of new creation here is the entries into the database.  If folks want to inspect or share the data, use the PCC/LST files.
        > > >
        > > > 3) "To view the character, you then only need the .pcg and the related cache file as 'data'..."
        > > >
        > > > Correct, but to reiterate the point from #2 and to clarify my position, I would not be enabling any method to share the cache (because there are no cache files that PCGen comprehends - they are all in the internal file format of whatever our back-end database is).  They exist solely as entries in a database.  These would always be version specific, and would never be intended to be shared.  I believe the code should be actively working to prevent such usage (install of a new version of PCGen on top of an old one should detect the old tables and delete them in their entirety).  It is just WAY too complicated to maintain that binary compatibility and when the data is solely a cache, no value in maintaining compatibility.  The compatibility is at the human readable PCG and LST layer, and we should keep it that way.  As a side note, the user should not be attempting to share an Apache Derby database with another person.  Hopefully no one would
        > really
        > > >  try that...
        > > >
        > > > 4) "Possibly want some option to [automatically?] delete 'old' cache files
        > > > if they havent been used for a while (that random time I selected all
        > > > the wrong sources and then changed my mind)"
        > > >
        > > > Some sort of aging mechanism (or ability to delete a specific cached set) is a good idea.
        > > >  
        > > > TP.
        > > >
        > > > --
        > > > Tom Parker
        > > >
        > > >
        > > >
        > > > ________________________________
        > > >  From: FerretDave
        > > > To: pcgen_developers@yahoogroups.com
        > > > Sent: Friday, January 25, 2013 4:24 PM
        > > > Subject: [pcgen_developers] Re: Data Loading, Memory, and Data Usage
        > > > 
        > > > Greetings,
        > > >
        > > > While I like databases... I do like the ease of editing of .lst files for homebrew, along with the simplicity of copying a few files around to share my homebrew with my group. And using your compiling analogy, it's easier to share source code than binaries...
        > > >
        > > > So rather than a lot of work moving the .lst files into a database, I like the idea of caching the data set for a combination of sources (that the user is actually using).
        > > >
        > > > So, in PCGen I specify my sources, and these get loaded up into memory (all as at present), and then that memory database gets written out to a cache file named or identified by the set of sources.
        > > > A different set of sources get saved to a different cache file.
        > > >
        > > > Next time I start pcgen, it checks if any of the .lst/.pcc files used by any cache file are newer than the cache, and if so, deletes the specific cache file. (the exact process 'make' uses to figure out what files to compile)
        > > >
        > > > When I specify my sources and click load, if there's a matching cache file, it loads that instead of processing the .lst/.pcc files.
        > > >
        > > > Sounds like a very efficient process, the first time around the user gets the exact same load speed they have now*, and subsequent loads are significantly faster. (at the expense of a bit of disk space for the cache files).
        > > >
        > > > Homebrew editors can hack away as required, never fearing a cache being out of date.
        > > >
        > > > The initial install takes the same amount of time/space as it's not including any cache files itself, so we never have to worry about what is the 'common' set of sources people would use, the system can dynamically build the cache for the sources *they* use.
        > > >
        > > > To view the character, you then only need the .pcg and the related cache file as 'data'...
        > > >
        > > > Possibly want some option to [automatically?] delete 'old' cache files if they havent been used for a while (that random time I selected all the wrong sources and then changed my mind)
        > > >
        > > > *Plus a minor amount of time spent writing the cache out to disk, though that could be done as a background task on a separate thread...
        > > >
        > > > Cheers
        > > > Dave
        > > > --- In pcgen_developers@yahoogroups.com, Tom Parker  wrote:
        > > > >
        > > > >
        > > > >
        > > > > I'm going to echo and expand upon a conversation I had some time ago (a few years maybe??) around data loading with another developer (Joe if I recall).  I believe some this is on one of our lists somewhere, or maybe a code meeting, but might as well update and review.  This is intended to give some context to possible alleviations of memory and CPU usage around rules data (PCC and LST data).
        > > > >
        > > > > I've already talked about potentially putting the rules data store information into a "real" database (Apache Derby type of thing is one option, something like hibernate could be another [although I don't think our objects fit into this model well, but I'll hold that as a separate topic]).
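        > > > >
        > > > > To make the "real database" option concrete: an embedded Derby store is only a JDBC URL away.  The path below is purely illustrative and assumes derby.jar is on the classpath:
        > > > >
        > > > >     import java.sql.Connection;
        > > > >     import java.sql.DriverManager;
        > > > >     import java.sql.SQLException;
        > > > >
        > > > >     // Open (or create on first use) an embedded Derby database.
        > > > >     static Connection openRulesCache() throws SQLException {
        > > > >         return DriverManager.getConnection(
        > > > >                 "jdbc:derby:settings/rulescache;create=true");
        > > > >     }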
        > > > >
        > > > > The other advantage of using a database is that certain load times could be dramatically reduced. 
        > > > >
        > > > >
        > > > > Since we're on the developers list, I can simply say that loading a set of
        > > > > PCC and LST data (assuming we are loading it in its entirety) is little different from compiling a program.  We take a set of raw data (PCC and LST files) and produce a compiled version
        > > > > (today in memory).  As long as we consider a few things (I'll address
        > > > > these below) there is no reason to think this compiled version couldn't be cached, just as compiled object files [.class files in a JAR, in PCGen's case] are what is shipped to end users (so they don't have to recompile every time they use a program).
        > > > >
        > > > >
        > > > > So let's assume for a moment that we are using an on-disk database for storing "compiled" data.  I'm going to talk in relational database thoughts, though I'm sure that can be adapted to other forms as necessary.  Let's assume we want to cache that data between runs of PCGen so that data load becomes very fast.  What are the things we need to consider to appropriately build and update such data?
        > > > >
        > > > >
        > > > > Consider one of the main reasons why we have to uniquely load data each time:
        > > > >
        > > > > File 1:
        > > > > MyFeat <> [blah]
        > > > >
        > > > > File 2:
        > > > > MyFeat.MOD <> [more blah]
        > > > >
        > > > > File 3:
        > > > > MyFeat.COPY=MyOtherFeat <> [blarg]
        > > > >
        > > > >
        > > > > Assuming these files are in different campaigns (PCC files), then we end up with a whole potential set of interactions, very specifically tied to what the loaded data represents (the answer to what MyFeat represents is different if File 2 is not loaded, etc).  So any "preloading" of data becomes rather challenging... it has to be done in the context of *all* of the loaded data. 
        > > > >
        > > > >
        > > > > So one consideration is that each permutation of loaded data would have to be compiled separately.  (A compiled version of RSRD is not useful if you loaded RSRD+eclipse)
        > > > >
        > > > >
        > > > > This leads to the consideration that you'd want to have more than one cache built (in case you have characters in two different types of campaigns, you would not want to "recompile" every time you swap campaigns).  So from a database design perspective there is probably a "cached campaigns table" that stores what files are loaded with each permutation, and then the actual data tables have a "campaign id" column that shows which campaign they are relevant to...
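        > > > >
        > > > > Roughly like this, where the table and column names are purely for illustration rather than a proposal:
        > > > >
        > > > >     import java.sql.Connection;
        > > > >     import java.sql.SQLException;
        > > > >     import java.sql.Statement;
        > > > >
        > > > >     // One row per cached permutation of sources, plus a campaign_id
        > > > >     // column on each data table tying its rows back to that permutation.
        > > > >     static void createSchema(Connection conn) throws SQLException {
        > > > >         Statement s = conn.createStatement();
        > > > >         try {
        > > > >             s.execute("CREATE TABLE cached_campaigns ("
        > > > >                     + "campaign_id INT PRIMARY KEY, "
        > > > >                     + "source_files VARCHAR(4096), "
        > > > >                     + "compiled_at TIMESTAMP)");
        > > > >             s.execute("CREATE TABLE ability ("
        > > > >                     + "campaign_id INT, "
        > > > >                     + "key_name VARCHAR(255), "
        > > > >                     + "data CLOB)");
        > > > >         } finally {
        > > > >             s.close();
        > > > >         }
        > > > >     }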
        > > > >
        > > > > Once that is possible, there is no reason for a basic end user to have to wait for some common builds, so certain items could probably be done at install time for things like the basic
        > > > > RSRD, SRD, Pathfinder, and then done on first load for other
        > > > > custom permutations.  From a usability standpoint for end users, this would make the install process a bit slower, but might make the basic use cases much faster.  Either way, repeated loads of similar data would be significantly faster.
        > > > >
        > > > >
        > > > > For items which are CPU gated (mobile devices), this could be a huge benefit: concentrate the CPU time in a known event and then never repeat that work.  This likely would alleviate the issue Chris highlighted with the data load effectively locking up the mobile device for some period of time.  Having the objects on-disk (on-flash really ;) ) would also alleviate at least some of the memory issues (objects would not be held in memory, and if everything was written to a database, the strings would be persisted on disk rather than referencing the original string, so the large strings that make up the LST files would also be cleaned up by the garbage collector).
        > > > >
        > > > >
        > > > > The next challenge is detecting a form of "cache dirty" situation where something has changed and the version in the database is old.  This is critical so that folks who develop data are not burned by changing a file while PCGen still thinks the in-database version is valid. 
        > > > >
        > > > >
        > > > > So with the desktop version (or the mobile version, if a data update was received), this would in effect be detecting whether any of the loaded files was changed since the compile was done.  This is *exactly* like an incremental compile done by Eclipse or other IDEs... the key point here being that it is much safer for us to detect a single change and force a full recompile, since there are some pretty tricky subtleties if we try to limit the reload to only a subset of objects or object types.
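        > > > >
        > > > > In database terms that probably just means recording, for each cached permutation, a fingerprint of every file that went into it.  Comparing timestamps is likely enough; a content hash (sketched below, method name made up) would also catch edits that don't bump the modification time, e.g. a restored backup:
        > > > >
        > > > >     import java.io.File;
        > > > >     import java.io.FileInputStream;
        > > > >     import java.io.InputStream;
        > > > >     import java.math.BigInteger;
        > > > >     import java.security.MessageDigest;
        > > > >
        > > > >     // Fingerprint one source file; the values recorded at compile time are
        > > > >     // stored next to the cached objects, and any mismatch at load time (or
        > > > >     // a file added to or removed from the set) forces a full recompile.
        > > > >     static String fingerprint(File file) throws Exception {
        > > > >         MessageDigest md = MessageDigest.getInstance("SHA-256");
        > > > >         InputStream in = new FileInputStream(file);
        > > > >         try {
        > > > >             byte[] buf = new byte[8192];
        > > > >             int n;
        > > > >             while ((n = in.read(buf)) != -1) {
        > > > >                 md.update(buf, 0, n);
        > > > >             }
        > > > >         } finally {
        > > > >             in.close();
        > > > >         }
        > > > >         return new BigInteger(1, md.digest()).toString(16);
        > > > >     }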
        > > > >    
        > > > >
        > > > > Why full compile with any change?  Because the alternative is hard, and fragile.
        > > > >
        > > > > Consider not just .MOD and .COPY as shown above, which is one of the subtleties.  Another would be an ABILITY changed to add a CHOOSE: entry: there may then be references to it in other file types that are no longer valid, and if those files are not reloaded, those issues would not be detected.  If we start with the presumption that a changed file means everything in that file is recompiled, then with .MODs we have to recompile that entire type of object... and once we also consider references to those objects, we effectively end up in a situation where a small change to an Ability in a single file means compiling "abilities and everything that can reference an ability"... so with any change we are talking about *everything*. 
        > > > >
        > > > >
        > > > > The more narrow case of detecting specifically what in the file has changed and looking up what references that item gets very similar to the "load only items for a specific PC" type design which requires significantly more work (it becomes random access to the files) and is much more fragile.  To this day, I have seen both NetBeans and Eclipse mess up incremental compiles due to some very subtle situations, so I really don't think trying to do a true incremental compile is worth the effort/risk.
        > > > >
        > > > >
        > > > > So net is that I think relying on an underlying database enables some pretty interesting capabilities that would help both the desktop and any mobile implementation.  This is one of my "someday" projects (though pretty far down on the list, which is why I don't talk about it much - managing expectations about when things will happen is always hard), so I'd definitely be interested in seeing this progress and get integrated into PCGen.  Also, by alleviating the memory issues while still having the full set of data available, it keeps a mobile implementation much closer to the desktop version, which (even if not initially implemented) keeps the option open to edit the PC in the mobile version, as well as dealing with some other subtleties without having to reinvent the wheel.
        > > > >
        > > > > Reinvent what?  Consider temporary bonuses.  To be useful in the mobile version, any object with a TEMPBONUS would need to be loaded on the mobile version *regardless of whether it is actually on the PC*... so there is still a pretty significant data scanning impact and the "random loader" discussed in the other thread is not sufficient [which is why the index I talked about there was step 1 of 3 or 4 - the TEMPBONUS objects would be one of those later steps, probably 3 or 4]... Otherwise the PC could only do "self" and not get effects from other party members.
        > > > >
        > > > >
        > > > > Obviously the first step would be discussions around what type of database we use and the tradeoffs of those options.  I'll have opinions [always do ;) ], but I do not have a full sense of the considerations, so I am open to proposals and would appreciate help on the tradeoffs if anyone wants to make recommendations or do analysis.
        > > > >
        > > > > So Chris, you asked earlier if I had preferred directions on things.  If I had to choose, this is probably one of the top things (if not the top) that I think could benefit both you and desktop PCGen... and therefore where I would point and direct your efforts if I had my druthers.
        > > > >
        > > > >
        > > > > TP.
        > > > >
        > > > > --
        > > > > Tom Parker
        > >
        >
        >
        >
        >