game mixing

  • sd23david
    Message 1 of 6, Oct 15, 2008
      There are two sound designers and a lead at my development house and
      we've recently sat down with our coder(s) to start sketching out ideas
      for a real-time mixer. That is, a system that will (somewhat)
      intelligently do the mix for us based on parameters that we feed it.

      Specifically, the parameters are based on a priority system (A, B, C,
      D and E) where priority A sounds are given the most volume in playback
      (dialog and weapons, for example) and lesser priority sounds are given
      less volume (ambiances, footsteps, etc).

      We haven't exactly pinned down the dB values for each level, but
      we're thinking -2 dB of attenuation per step down in priority.
      Something like:

      A = -2 dB
      B = -4 dB
      C = -6 dB

      ...and so on.
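
      In C++-ish terms, the lookup we're imagining is about this simple
      (the enum and the fixed -2 dB step are just our current working
      assumptions):

          #include <cmath>

          enum class Priority { A, B, C, D, E };

          // Fixed -2 dB step per level below the top; A itself sits
          // at -2 dB in our current scheme.
          float priorityAttenuationDb(Priority p)
          {
              return -2.0f * (static_cast<int>(p) + 1);
          }

          // Convert dB into the linear gain the voice is scaled by.
          float dbToLinear(float db)
          {
              return std::pow(10.0f, db / 20.0f);
          }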


      All the sound assets would be given equal RMS normalization, of
      course. The mixer, as I said, just adjusts the playback volume of
      those assets.

      My primary concern here is with fade times so the changes in volume
      don't sound too rapid and, also, with the possibility that volumes are
      going to sound perpetually "shifty" and dynamic all the time.

      Still, it's a system we're going to test just to see what happens.
      With the sheer number of sound assets and competing frequencies, we
      have to create something so it doesn't turn into a cacophony of
      audio "grey goo".

      What systems and methods have you all worked with in game mixing? It's
      an interesting problem in that it's not linear like film and you can't
      anticipate every possible permutation of a basic set of sounds.
    • Andy Farnell
      Message 2 of 6, Oct 15, 2008
        On Wed, 15 Oct 2008 18:58:17 -0000
        "sd23david" <sd23david@...> wrote:

        > My primary concern here is with fade times so the changes in volume
        > don't sound too rapid and, also, with the possibility that volumes are
        > going to sound perpetually "shifty" and dynamic all the time.

        Hi David,

        Practically, 40ms is the smallest time you can work with (twice the
        Gabor threshold), but on audio with low frequencies that will
        produce a noticeable bump as it moves. As a rule of thumb, any fade
        time should be 4 times the lowest period, so for 20Hz a 200ms fade
        is okay. Brighter or noisy highpassed material will fare better.
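
        As a sketch, that rule of thumb is just:

            #include <algorithm>

            // Minimum safe fade time in seconds for material whose
            // lowest significant frequency is lowestHz: four periods,
            // floored at 40ms (twice the Gabor threshold).
            float minFadeSeconds(float lowestHz)
            {
                return std::max(0.040f, 4.0f / lowestHz);
            }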

        In a crossfade, where one sound goes to zero, the cosine/sine
        quadrant (approximating equal power) is fine. If the signal is
        going from one level to another then you need to raise the cosine
        and use half of it (Hanning) so that you approach the new level
        smoothly. An alternative is a half Gaussian. Basically, avoid any
        corners such as you would get from a linear breakpoint envelope.
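
        For the level-to-level case, a raised-cosine ramp looks something
        like this sketch:

            #include <cmath>

            // Gain at normalised time t in [0,1], moving from 'from' to
            // 'to' along a raised cosine: zero slope at both ends, so
            // none of the corners a linear breakpoint envelope has.
            float hannFade(float from, float to, float t)
            {
                const float pi = 3.14159265f;
                float w = 0.5f - 0.5f * std::cos(pi * t); // 0 -> 1
                return from + (to - from) * w;
            }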

        To avoid memory table lookups, quite an efficient computational
        solution is just to add a first-order lowpass to the control signal
        at 0.25f (same as 4t, so 5Hz) to round off the corners of a linear
        change. But if you have a lot of channels these might add up to a
        significant cost, so the lookup table and stored curve functions
        would be better. Avoid computing fancy log/exponential curves,
        because they stay CPU hungry even when not much is happening (as
        you approach the destination level).
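
        As a sketch, that smoother is one multiply-add per control update
        (the coefficient comes from the standard one-pole exp mapping):

            #include <cmath>

            // First-order lowpass on the control (gain) signal, rounding
            // off the corners of a linear or stepped change. A 5Hz
            // cutoff corresponds to the 4t figure above.
            struct OnePoleSmoother
            {
                float state = 0.0f;
                float coeff;

                OnePoleSmoother(float cutoffHz, float sampleRate)
                    : coeff(std::exp(-2.0f * 3.14159265f
                                     * cutoffHz / sampleRate)) {}

                float process(float target)
                {
                    state = coeff * state + (1.0f - coeff) * target;
                    return state;
                }
            };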

        Those are theoretical points. In practice, for a game, I would
        define a globally visible "volatility" variable that can scale all
        fade curves, so by default it's 1/4 second (250ms), but it can fall
        to 80ms in heavy, fast-cut action scenes and rise to about 2
        seconds for tranquil outdoor ambiance.
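
        In code that's nothing more than a shared scale factor (names made
        up):

            // Globally visible fade-time scaler: 1.0 gives the 250ms
            // default, ~0.3 gives 80ms for fast-cut action, 8.0 gives
            // 2s for tranquil ambiance.
            float g_volatility = 1.0f;

            float scaledFadeSeconds(float baseFadeSec = 0.25f)
            {
                return baseFadeSec * g_volatility;
            }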

        a.

        --
        Use the source
      • David Steinwedel
        Message 3 of 6, Oct 15, 2008
          Hey David,

          Nice to see some discussion on this topic. It's great to see this problem getting attacked on many fronts.

          In the past I've used a bus-based snapshot-style mixer. The system let me set up an array of snapshot presets. Each snapshot was given a priority, and the highest-priority snapshot at any time wins. This style of system lets you assign priority based on the game state/situation as opposed to the sound playing (sometimes the footstep is more important than the gunshot). Each snapshot had custom fade in/out times, system reverb overrides, and per-bus insert effects.
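
          The selection logic itself is simple; a rough sketch (names
          hypothetical, details varied per project):

              #include <string>
              #include <vector>

              struct MixSnapshot
              {
                  std::string name;
                  int         priority;     // higher wins
                  float       fadeInSec;
                  float       fadeOutSec;
                  bool        active = false;
                  // ... bus gains, reverb overrides, per-bus inserts
              };

              // Highest-priority active snapshot wins; null means fall
              // back to the default mix.
              const MixSnapshot* currentSnapshot(
                  const std::vector<MixSnapshot>& snaps)
              {
                  const MixSnapshot* best = nullptr;
                  for (const auto& s : snaps)
                      if (s.active
                          && (!best || s.priority > best->priority))
                          best = &s;
                  return best;
              }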

          --D 
           
          ______________________________
          http://www.dsteinwedel.com/




        • Andrew Thomas Clark
          Message 4 of 6, Oct 16, 2008
            On Full Auto and Full Auto II we did some interesting stuff with
            dynamic mixing.

            The PS3 and Xbox 360 audio software architectures both allow
            incredible busing and FX group flexibility, which lends itself
            really well to logical grouping and processing of contextually
            related sounds.

            1.

            The most valuable lesson learned on those projects was not to try to
            _imagine_ what the best gain and FX levels were for the various
            categories, and what kinds of fade times and curves would sound ok.
            We exposed these as much as possible as _run-time tweakables_ in a PC
            GUI interface that pushed parameters instantly to the console via
            debug channel or sockets - so the audio lead could actually mix the
            game while the game was being played.
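
            On the console side that mostly boils down to poking values
            into a table the mixer reads every frame. A sketch (transport
            omitted, protocol and names invented for illustration):

                #include <map>
                #include <sstream>
                #include <string>

                // The PC GUI sends lines like "bus.weapons.gain -4.5"
                // over the debug channel/socket; we parse them and stash
                // them where the mixer can read them each frame.
                std::map<std::string, float> g_tweakables;

                void applyTweakLine(const std::string& line)
                {
                    std::istringstream in(line);
                    std::string name;
                    float value;
                    if (in >> name >> value)
                        g_tweakables[name] = value;
                }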

            We even had some limited HUI support for the GUI. I can't describe
            how unbelievably awesome it was to watch those mechanical faders move
            around in response to changing game states, and to be able to tweak
            in real-time. It felt like stepping out of the dark ages. And,
            relatively speaking, I haven't even been in the industry that
            long ;) At least I was editing gains in .txt files "back in the
            day".

            2.

            We did use state-based mix snapshots and morphs, particularly for the
            FA2 dynamic music crossfades.

            But!

            I was much more intrigued by the less rigid, more combinatorial
            mixes that emerged by focusing on giving each individual bus its
            own logic. I.e., different faders and mix groups responded to
            different things logically, but somewhat independently, so they
            weren't all necessarily forced to identify and "snap" to a
            global context. With care and sensitivity, this type of system
            moves away from a paradigm where the ideal contexts sound great
            but may rarely occur, and awkward morphs are tolerated, towards
            a more fluid, always-appropriate, ever-evolving mix.
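
            Structurally, that just means each bus owns its own little
            rule, evaluated independently (a sketch, with a made-up
            GameState):

                #include <functional>
                #include <string>
                #include <vector>

                struct GameState { bool inCombat; float playerSpeed; };

                struct Bus
                {
                    std::string name;
                    float       targetDb = 0.0f;
                    // This bus's own logic: game state in, target dB out.
                    std::function<float(const GameState&)> rule;
                };

                void updateBuses(std::vector<Bus>& buses,
                                 const GameState& gs)
                {
                    for (auto& b : buses)
                        if (b.rule)
                            b.targetDb = b.rule(gs); // fades downstream
                }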

            Cheers,
            -A

            http://www.silvershard.com
          • Andy Farnell
            Message 5 of 6, Oct 17, 2008
              These are great points, Andrew. I have always been a big
              advocate of in-world development. Working on running code
              has become central to my whole coding philosophy through
              years of work on live webservers and DSP using Pure Data.
              Separation of design from results seems so archaic that I
              often find it hard to switch back to using compilers. There
              are lots of reasons to cheer for dynamic interpreted
              languages in games development, and to make a clear
              separation of the runtime graphical environment from sound
              design using dynamically reroutable OSC bindings through Lua
              etc. The payoff in dev time seems obvious to me.

              All the same, no matter how many times you run through mix
              scenarios, all the flying faders in the world aren't going
              to help you with a fundamental 'problem' (a logical fact):
              truly interactive real-time worlds will always present
              you with unforeseen scenarios, so you will always need some
              measure of a rule-based approach, and should try to define
              it as exhaustively as possible.





              --
              Use the source
            • Andrew Thomas Clark
              Message 6 of 6, Oct 18, 2008
                --- In gameaudiopro@yahoogroups.com, Andy Farnell <padawan12@...>
                wrote:
                >
                > truly interactive real-time worlds will always present
                > you with unforeseen scenarios, so you will always need
                > some measure of a rule-based approach, and should try to
                > define it as exhaustively as possible.

                Yup, I agree. I just personally prefer having specific
                rules attached to specific buses, rather than specific
                mixes attached to specific global contexts.

                Also cool was dynamically assigning mix groups at runtime...

                i.e. AI weapon sounds were assigned to different groups
                depending on whether or not they were currently targeting
                the player.
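
                In sketch form (group names invented):

                    // Route an AI's weapon voices through a hotter mix
                    // group while that AI is targeting the player.
                    enum class MixGroup { EnemyFireAtPlayer,
                                          EnemyFireAmbient };

                    MixGroup weaponGroupFor(bool targetingPlayer)
                    {
                        return targetingPlayer
                            ? MixGroup::EnemyFireAtPlayer
                            : MixGroup::EnemyFireAmbient;
                    }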

                Another cool thing I forgot to mention about our real-time editor <->
                console bridge...

                3.

                We were able to do vid cap and run-time param cap at the same time.

                3.a. actual case:

                i. Recorded a couple minutes of gameplay, and recorded (via the PC
                editor) what the adaptive music system was doing with the various
                stems' volumes (as MIDI volume controllers).

                ii. In Logic, synced up the MIDI param cap with the video
                clip, mapped the MIDI volume controllers to Logic buses,
                and assigned the appropriate stems to the appropriate
                buses.

                This let composers audition and/or compose their adaptive
                stems to "gameplay" (albeit one iteration). They could move
                their music content around relative to the action to check
                that various parts of the score worked. Also, they could
                tweak the timing and shape of the MIDI curves and provide
                precise feedback to the coders about what fades worked
                best.

                3.b. hypothetical case:

                I always wished our QA team were dumping A/V while
                testing... then they could have attached reference vid of
                audio bugs. (Trying to decipher text descriptions of audio
                issues was as ridiculous as it was frustrating.)

                I envisioned the next stage of this being dumping _all_
                audio params along with the vid, in a format that could be
                played back and troubleshot in our PC audio content
                manager / mixer tool.

                Cheers,
                -A

                http://www.silvershard.com