
Re: [SACC-L] An invitation to discussion

  • Mark Lewine
Message 1 of 3, Oct 9, 2006
Well, Brian, you answered my individual request for 'best practices' of qualitative assessment in the social sciences by starting a major discussion of the issue on the listserv. I loved your essay on the issue, by the way. A very thoughtful and literate discussion, but I am afraid that your main point is correct. Many of us, including myself, want you to carry the load on this because we sense that it does not matter what we think: "they" will use "assessment" to hand our lives over totally to the accountants and the techies. My God, look at the way the issue was constructed by my colleague for our "faculty conversation," which prompted me to write you for help! (The following is the text of a meeting announcement asking faculty leaders to participate in a discussion of this assessment issue, which I desperately sent to Brian, hoping he would save me with a qualitative method that would trump the "technological" assessment instrument of torture sure to come.)

Faculty Conversation: Friday, October 13, 2006

at Gwinn Estate

"Classroom Assessment Techniques Series"


Please join us for the first Conversation in the Series, which focuses on technology-based assessment. The November conversation will highlight traditional forms of non-technology assessment, and the February conversation will focus on best practices of both modalities.

Technology-Based Assessment

Assessment has always been an important means of evaluating the effectiveness of learning and instruction, and of determining whether instructional methods accomplish course objectives. The increasing use of technology in both physical and virtual learning environments has been accompanied by a rise in technology-based assessment. This discussion will focus on a cost-benefit analysis of technology-based assessment practices.



For discussion at the conversation:

• What practices enhance and maintain the integrity of the assessment and the learning process?

• How can the academic community benefit from technology-based assessment while avoiding its pitfalls?

• How does technology-based assessment enhance student achievement?

• Describe successful projects where technology-based assessment has been implemented.


Facilitator: Christie Okocha, Assistant Professor, English

The Conversation begins promptly at 12:00 noon

Faculty Development Program
      ----- Original Message -----
      From: Lynch, Brian M
      To: SACC-L@yahoogroups.com
      Sent: Monday, October 09, 2006 10:19 AM
      Subject: [SACC-L] An invitation to discussion



      Hello again,

This is Brian Donohue-Lynch from a small community college in
northeastern Connecticut (Quinebaug Valley Community College). Some of
you have seen my posts about a variety of things, including those on
"learning outcomes assessment," and it is this particular discussion
that I would like to re-animate. However, if it needs a place of its
own, I would be glad to branch it off somehow, though I hope that it
would be of general interest to this full group.

Since I shared my presentation on this topic way back in Savannah, it
has been interesting to hear back from people, including from some who
may not have thought at the time that they would be dealing with the
topic themselves at their respective institutions. What I am looking
for is a further conversation: rather than debating the merits of such
efforts, I want to focus on a key point to which I think
anthropologists have a particular contribution to make. It has to do
with our discipline's fundamental concern for understanding pattern,
process, system, etc. in cultures and societies.

It is my growing understanding, in fact, that there is a persistent
dilemma in higher education around the whole challenge of doing
meaningful "learning outcomes assessment." And the dilemma is not that
we don't know how to "do assessment"; many among us have little trouble
identifying intended learning outcomes for our courses, establishing
standards for assessing them, creating multiple ways to do the actual
assessment, and so on. The dilemma, instead, is that we don't yet know
how to do such things in an organized, systematic way, with the right
tools and perspectives that would enable us to see beyond the
accumulated artifacts of our numerous, often disparate efforts. Along
the way, as well, there are "traps" that send efforts off on detours,
which then tend to confirm for at least some people that this is all
futile.

One such trap is to continue to approach "assessment" through any
number of previous models, the languages and categories of which become
their own rationales for confusion and failure. There is value, for
example, in "rubrics," "outcomes," "abilities," and the like, but
dominant systems of assessment that have already been tried and
abandoned, sometimes out of sheer exhaustion, continue to be called up
by such terms, and their potential value is overshadowed by the "Oh
God, NO! Not AGAIN!" syndrome.

I have worked with a number of faculty at our own place who almost have
to go into recovery from previous assessment experiences before they
can ever hear the words again. I think of Chief Inspector Dreyfus in
the Pink Panther films, who eventually develops a severe tic and an
uncontrollable laugh at the mere mention of "Clouseau." Not only do
some have a negative reaction to assessment, but some continue to think
about it through the cumbersome, confusing, contentious models of their
past experiences.

But the larger problem I see is one that calls for an "anthropological
imagination." Imagine any number of situations in which a cultural
anthropologist has talked with people and observed them in their
everyday behavior and experience, and then has stepped back to draw
into focus the "big picture" that few if any IN the culture itself
have consciously imagined. It is something of a fundamental insight of
the discipline, in fact, that most people, in carrying out everyday
patterns of behavior, don't do so with a comprehension of, or attention
to, the complexities, structures, and systems of the "big picture" of
their cultures. Anthropologists are the ones who are supposed to have
begun to comprehend that there IS a "big picture" (a deep structure, a
pattern of culture, a pattern of behavior and for behavior, etc.) and
to have developed the methodologies for drawing these dimensions of
system and structure and pattern into view.

Part of the confusion seems to be that in doing learning assessment we
are in effect trying to get at the "anthropological perspective" (the
patterns, the systematically achieved learning outcomes), but the focus
and practice of those doing the assessing are stuck at the level of a
kind of Boasian particularism: gathering countless artifacts and
observations and counts without any comprehension of how to draw
meaning from the accumulations. Or, to use another analogy from the
discipline, we are like archeologists who accumulate piles, boxes, and
drawers of artifacts (learning artifacts?) but don't collect them in
any systematic way that would then allow us to "read" them for their
three-dimensional meaning. Or, finally, we are stuck in the emic phase
of our research, not knowing how to move to an etic phase.

The problem we are facing, the plateau at which so many efforts seem to
get stalled, is not that people don't know how to identify important
learning outcomes, or how to assess these accurately, or how to develop
standards (rubrics) for such assessment, or how to link these to their
course offerings and programs. It is that we don't know how to do all
of this in a clear, organized, and systematic way, with the right
tools, so that we can then make sense of what we have finally
accumulated.

Imagine if Malinowski had tried to do his research in the Trobriand
Islands using an Outcomes Assessment committee! We might never have
gained a grasp of the systematic/systemic nature of the Kula Ring. We
might have some significant collections of artifacts, and pictures, and
inventories, and numbers, and assessments of many individual exchanges,
and evaluations of the variety and quality of bracelets and necklaces,
and so on, but we'd still be wondering to ourselves, "Now what do we do
with all this stuff!?"

This is where I have seen a few (and only a few) emerging examples of
tools and approaches that head in the right direction: enabling people
to move toward a deeper and richer reading of the "stuff" they have
accumulated in the name of learning outcomes assessment. I have also
seen quite a few efforts that claim to be headed in this direction but
that stop well short of any "big picture" analysis capability.
Unfortunately, many who are under the pressure of pending accreditation
visits or the growing demands of legislatures turn to things that sound
like they will answer their assessment needs but that in fact continue
to fall far short (like electronic portfolios, or "student engagement
surveys," or even standardized tests). These in themselves are not
useless or bad, just not enough to get us off the plateau.

Our small pilot project (being carried out in at least one other
college in our system, and at probably 10 or 12 other colleges across
the U.S.) is working with a system that doesn't itself do the
assessment of student learning, but that instead gives us a tool for
gathering mostly qualitative data on actually achieved student learning
outcomes, which in turn gives us a way to read the "big picture" at our
institutions. It is a system, by the way, that was developed and is now
supported by a guy with a significant background in anthropology!

I would love to talk further with anyone interested in this challenge.
I think that many efforts leave people stuck, not sure how to get off
the plateau. It is often as if we are trying to create the "big
picture" of assessment (the scale of system, and process, and pattern)
from the inside out, by engaging everyone in the micro-routines of
defining rubrics, crafting interesting classroom assessment techniques,
and gathering samples of "demonstrated student learning outcomes,"
hoping in the process that all this effort will somehow result in a
coherent system, pattern, and process.

      This kind of effort needs anthropologists, desperately!

I look forward to further communication about this. I hope that at
least a few on our list find this a useful discussion.

      Brian



