
Adding external C code to Tekkotsu

  • Ignacio Herrero Reder
    Message 1 of 7, May 5, 2009
      Hi. For my research interests, I need to perform a Hough transform over
      a flow of images, in order to detect the main lines in the image. Right
      now I'm receiving RAW images on the laptop and processing them via the
      OpenCV image library. However, I have a bottleneck in the image Tx-Rx:
      sometimes I lose a bunch of frames (10 or more), even at low image
      quality. I'm going to try with segmented images (hoping that RLE-encoded
      images "weigh" less and relieve the wireless channel), but if that
      fails I'm considering incorporating a Hough-based line detection module
      into the Tekkotsu architecture onboard the AIBO, so the wireless traffic
      would be only a few values describing the lines found in every frame.
      Would it be possible to compile an OpenCV program for the MIPS? If it is
      not possible, I've found another algorithm at
      http://www.inf.ufrgs.br/~laffernandes/kht.html which is supposedly
      programmed in plain C++. Would it be possible to add this algorithm
      to Tekkotsu? Do I need to do anything special?
      Thanks in advance.

      --

      Ignacio Herrero Reder / Tl. +34-95.213.71.60
      Dpto. Tecnologia Electronica / Fax: +34-95.213.14.47
      E.T.S. Ing. Telecomunicacion / nhr@...
      Universidad de Malaga / http://www.dte.uma.es
      Campus Universitario de Teatinos
      29010 Malaga, Spain
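For readers unfamiliar with the technique Ignacio mentions: the core of a Hough line transform is just a vote over (theta, rho) bins for each edge pixel, after which the fullest bin gives the dominant line. A minimal, OpenCV-free sketch (illustrative only; function and parameter names are our own, and real code would run over the AIBO's camera frames):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

const double PI = 3.14159265358979323846;

// One detected line in normal form: rho = x*cos(theta) + y*sin(theta).
struct Line { double theta, rho; };

// Vote every edge pixel into a (theta, rho) accumulator and return the
// parameters of the bin with the most votes -- the dominant line.
Line houghStrongestLine(const std::vector<std::pair<int,int> >& edges,
                        int width, int height,
                        int thetaBins = 180, int rhoBins = 200) {
    double maxRho = std::sqrt(double(width) * width + double(height) * height);
    std::vector<int> acc(thetaBins * rhoBins, 0);
    for (std::size_t i = 0; i < edges.size(); ++i) {
        double x = edges[i].first, y = edges[i].second;
        for (int t = 0; t < thetaBins; ++t) {
            double theta = t * PI / thetaBins;
            double rho = x * std::cos(theta) + y * std::sin(theta);
            // Shift rho from [-maxRho, maxRho] into a bin index.
            int r = int((rho + maxRho) / (2 * maxRho) * rhoBins);
            if (r >= 0 && r < rhoBins) ++acc[t * rhoBins + r];
        }
    }
    int best = 0;
    for (std::size_t i = 1; i < acc.size(); ++i)
        if (acc[i] > acc[best]) best = int(i);
    Line line;
    line.theta = (best / rhoBins) * PI / thetaBins;
    line.rho = ((best % rhoBins) + 0.5) / rhoBins * 2 * maxRho - maxRho;
    return line;
}
```

The double loop over edge pixels and theta bins is what makes the transform expensive onboard, which is exactly Ethan's concern below about the AIBO's processing power.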
    • Ethan Tira-Thompson
      Message 2 of 7, May 5, 2009
        There are a couple ways to do this. If it's only a subset of OpenCV,
        then you could pull out the necessary files and dump them in
        Tekkotsu. Assuming these are pure C files, the trick is the Makefiles
        are set up only to look for .cc files and compile them as C++, so
        there may be some slight performance or portability issues with just
        renaming the extension to compile as C++, but usually this isn't too
        significant.

        Alternatively, you could "manually" compile either the necessary
        pieces or maybe all of OpenCV as a library, and just modify the
        project Makefile to link it in. This would be the easiest way to
        retain compilation as pure C. This is how libpng and libjpeg are
        handled, which you can see pre-built in the Tekkotsu/aperios/
        {bin,include,lib,share} directories. You could add OpenCV there and
        modify the LDFLAGS and PLATFORM_FLAGS to link against it, and/or
        modify USER_LIBS if you want to put it somewhere else. To take this
        approach, it's generally possible to set the GCC and LD environment
        variables to point at the OPEN_R_SDK/bin compilers instead of the
        system's native compilers, and hopefully the OpenCV build scripts will
        respect that.
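Sketched as shell steps, the library approach might look roughly like this. All paths and the library name are illustrative assumptions; check your actual OPEN_R_SDK install location and what the OpenCV build of your version actually produces:

```shell
# Point the build at the OPEN-R cross-compilers instead of the native ones.
export GCC=/usr/local/OPEN_R_SDK/bin/mipsel-linux-gcc   # hypothetical path
export LD=/usr/local/OPEN_R_SDK/bin/mipsel-linux-ld

# Configure and build OpenCV (or just the pieces you need) with those
# compilers; the exact invocation varies by OpenCV version:
# CC="$GCC" ./configure --host=mipsel-linux && make

# Stage headers and the static library next to libpng/libjpeg:
cp libopencv.a Tekkotsu/aperios/lib/                    # hypothetical name
cp -r include/opencv Tekkotsu/aperios/include/

# Then add the library (e.g. "-lopencv") to LDFLAGS/PLATFORM_FLAGS, or to
# USER_LIBS if you keep it outside the Tekkotsu tree.
```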

        As for the MIPS issue, hopefully it won't be a problem. I don't have
        direct experience, but from what I've heard OpenCV has performance
        specializations for x86, with generic implementations to keep it
        portable to other architectures.

        So having said all that, my main concern is actually whether the Aibo
        has enough processing power to give you a better frame rate than
        transferring the images... Hough transforms are usually pretty
        intensive. You might be better off using compressed JPEG images over
        a UDP connection instead.

        -Ethan
      • Ignacio Herrero Reder
        Message 3 of 7, May 6, 2009
          Thanks for your advice, Ethan. I will try to add either OpenCV or proprietary plain C++ Hough algorithms to the Tekkotsu chain and, if successful, I will tell you whether they are fast enough.

          With respect to sending images to the laptop and computing the Hough transform there: I think I'm using UDP and JPEG-compressed images now, as I'm using your RawCamBehavior class and have JPEG compression selected in the Tekkotsu .xml configuration file. Are there other tricks to lower the amount of Tx wireless info? Perhaps a lower compress_quality? Encoding grayscale instead of color? (I don't need colours, as the Hough algorithm works with grayscale images, I think.) Choosing another jpeg_dct_method instead of "fast"?

          Perhaps I will try with RLE segmented images, as I only need to recognise simple lines such as field limits, a beacon, simple obstacles, and so on, and RLE frames should be smaller than JPEG or raw, as there will be just 3 to 5 regions of the same color in the image.
            Thanks again and regards.
                Ignacio




        • Jacek Malec
          Message 4 of 7, May 6, 2009
            Regarding the Hough transform on AIBO, and some experiences with it,
            you may wish to have a look at the report of our student, Peter Mörck
            (and his code as well), Line Detection for Self-Localization of a SONY
            AIBO Robot, available at:

            http://ai.cs.lth.se/education/finished_examination_projects/

            Please look for 2007 reports.

            best regards,
            jacek

          • Ignacio Herrero Reder
            Message 5 of 7, May 6, 2009
              Thanks a lot, Jacek!  Best Regards
                     Ignacio



            • Ethan Tira-Thompson
              Message 6 of 7, May 6, 2009
                > With respect to sending images to laptop and compute Hough transform
                > there, I think I'm using UDP and JPEG compressed images now, as I'm
                > using your RawCamBehavior class, and I have jpeg compression
                > selected at the tekkotsu .xml configuration file. Are there other
                > tricks to lower the amount of Tx wireless info?

                Like you suggest, both lowering compress_quality and using grayscale
                would help. I don't think jpeg_dct_method will significantly affect
                size, just computation speed of the compression.

                Usually I'm able to stream full framerate video using the default
                settings, so you may be running into network congestion or a poor
                signal in your area. So a hardware solution would be to get a
                dedicated access point.

                Another important detail I should have mentioned before is that for
                best bandwidth, it's important not to use wireless on both the sending
                and receiving sides (assuming both are connected to the same access
                point). Even with a dedicated AP, the problem is the data has to
                bounce through the AP, so there are two transmissions: one from the
                robot to the AP, and then from the AP to your computer. That's
                cutting bandwidth in half, but what's worse in my experience is that the
                retransmission from AP to computer often collides with the next image
                being sent from the robot, so you get a significant number of dropped
                frames.

                > Perhaps I will try with RLE segmented images, as I need only
                > recognise simple lines, as field limits, a beacon, simple obstacles,
                > and so on...and RLE frames should be smaller than jpeg or raw, as
                > there will be just 3-4 or 5 regions with the same color in the image.


                RLE is very small, so this might work for you. It may require some
                color calibration to segment the way you want. Noisy images can get
                to be larger than JPEG, but usually it's smaller.

                -Ethan
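To see why a segmented frame with only a handful of color regions compresses so well, here is a toy run-length encoder for one row of segmented pixels (a sketch with our own names; Tekkotsu's actual RLE wire format differs):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Collapse one row of segmented (color-indexed) pixels into (color, count)
// runs. A row that crosses only a few color regions shrinks to a few pairs.
std::vector<std::pair<unsigned char, int> >
rleEncodeRow(const std::vector<unsigned char>& row) {
    std::vector<std::pair<unsigned char, int> > runs;
    for (std::size_t i = 0; i < row.size(); ++i) {
        if (!runs.empty() && runs.back().first == row[i])
            ++runs.back().second;                       // extend current run
        else
            runs.push_back(std::make_pair(row[i], 1));  // start a new run
    }
    return runs;
}
```

A camera row that crosses, say, field / line / field segments encodes as just three (color, count) pairs, which is why frames with 3 to 5 regions are so small; conversely, a noisy segmentation breaks into many short runs, matching the caveat above that noisy images can exceed JPEG size.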
              • Ignacio Herrero Reder
                Message 7 of 7, May 6, 2009
                  Ethan Tira-Thompson wrote:

                  > Usually I'm able to stream full framerate video using the default
                  > settings, so you may be running into network congestion or a poor
                  > signal in your area. So a hardware solution would be to get a
                  > dedicated access point.
                  >
                  > Another important detail I should have mentioned before is that for
                  > best bandwidth, it's important not to use wireless on both the sending
                  > and receiving sides (assuming both are connected to the same access
                  > point). Even with a dedicated AP, the problem is the data has to
                  > bounce through the AP, so there are two transmissions: one from the
                  > robot to the AP, and then from the AP to your computer. That's
                  > cutting bandwidth in half, but what's worse in my experience is that the
                  > retransmission from AP to computer often collides with the next image
                  > being sent from the robot, so you get a significant number of dropped
                  > frames.



                  I've tried with a dedicated AP, but the frame rate was worse than with an ad-hoc AIBO-laptop network, perhaps due to the reasons you explained above. I haven't tried with the laptop wired to the AP; I think I'll give it a try.

                  Thanks again for your help.
                      Ignacio