
Re: Neural network use?

Connectionist
Mar 4, 2000
This comes in late, but I really haven't had time to check this site in the last two months.

Between 1993 and 1996 I was working on neural networks, trying to explore their usability for the semantic and pragmatic (contextual) processing of natural-language written texts (newspaper articles, scientific papers, various domains and topics). I wanted to build a generic system for the automatic generation of summaries for these texts. I designed a whole architecture for the system, involving the parallel use of symbolic and connectionist processors for the various levels of processing, and implemented the central component, the pragmatic neural network, which decides which sentences in the original text are more important than the others and should be used or reused for the construction of the final summary or abstract. This NN used a dozen features that refer to the whole sentence rather than just the individual words in it, things like contrast (whether there is an explicit or implicit contrast between words in the sentence, or even between this sentence and a previous one) or elaboration (whether the current sentence expands on something started in the previous one). In order to check the feasibility of automatically encoding these high-level features, I wrote a kind of manual and gave it to 4 people who encoded the same texts. There was a high degree of agreement among them, both on the individual features (is there a contrast here or not?) and on whether, in the end, a specific sentence is important or less important. The former was about 60% on average, the latter about 90%. Then they went on and encoded different texts on their own, and the resulting feature vectors were used to train the pragmatic NN. The performance of the NN on unseen texts and sentences was reminiscent of the degree of agreement among the human encoders: the NN decisions and the human decisions coincided 60% of the time on average. That was my PhD (Manchester, UK).
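To give a rough idea of what such a pragmatic classifier boils down to, here is a minimal sketch (Python/NumPy, not my original implementation): each sentence is reduced to a vector of hand-coded, sentence-level features such as contrast and elaboration, and a small feed-forward network is trained to score how summary-worthy the sentence is. The feature count, network size and toy training data below are made up purely for illustration.

# Minimal sketch of a sentence-importance classifier of the kind
# described above. Each sentence is a vector of hand-coded,
# sentence-level features; the network outputs a score for
# "important" vs "less important". Everything here is illustrative.

import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 12          # e.g. contrast, elaboration, ... (illustrative)
N_HIDDEN = 8

# Toy training data: one row per sentence, 0/1 per feature,
# label 1 = "important", 0 = "less important".
X = rng.integers(0, 2, size=(200, N_FEATURES)).astype(float)
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # stand-in labelling rule

# One hidden layer, sigmoid activations, plain gradient descent.
W1 = rng.normal(0, 0.5, (N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.5, (N_HIDDEN, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(500):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()

    # backward pass (cross-entropy loss with sigmoid output)
    d_out = (p - y)[:, None] / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_hid
    db1 = d_hid.sum(axis=0)

    W2 -= lr * dW2
    b2 -= lr * db2
    W1 -= lr * dW1
    b1 -= lr * db1

def sentence_importance(features):
    """Return the network's score that a sentence is summary-worthy."""
    h = sigmoid(features @ W1 + b1)
    return float(sigmoid(h @ W2 + b2)[0])

# Sentences whose score clears some threshold would be kept for the summary.
print(sentence_importance(np.ones(N_FEATURES)))

In the real setup the labels came from the human encoders' decisions, and thresholding the output score is just one obvious way to turn it into a keep/drop decision for the summary.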
Then I came to Germany for a Post-Doc on spoken dialogue systems, and I haven't done anything more on my hybrid system since then. And there is a lot to be done.

I've got some conference papers and the PhD itself providing more details.

Till the next time (whenever that is),

Connectionist