[Slides: Mattern, THATCamp Theory]

This past weekend I led a workshop at THATCamp Theory, at Rutgers, on evaluating and critiquing multimodal projects. I must admit, my talk was kind of a mash-up of two older projects: my CUNY DHI talk from last October (video here) and this post. Above are my slides, and below are my notes.

Unfortunately, I was able to attend only Day One of the two-day conference (Rory said Day Two was quite the brain-bender). Still, I thoroughly enjoyed the sessions I was able to take part in!

EVALUATION / CRITIQUE OF DH PROJECTS

This workshop will focus on developing a critical vocabulary for responding to DH work and systems for providing meaningful evaluative feedback, including 1) developing critical evaluative criteria for various formats of multimodal work and 2) identifying the theoretical frameworks that inform those criteria. We’ll consider both professional and student projects and spend some time considering how to make project evaluation an integral part of the DH classroom. Depending on the interests of the group, our case studies might include data visualizations, map-based projects, crowdsourced archival projects, and other interactive publications.

  • Recognize that there’s a history of considering “multimodal evaluation” in composition

[SLIDE 2] I’m not fully ensconced in the DH community, but I’m sympathetic to its interest in different forms, practices, and praxes of scholarship.

  • Craft as a useful model for considering how similar intellectual values and practices span domains – reading, writing, making in various modalities
  • But not all making is scholarship

[SLIDES 3-4] McPherson’s article on the “Multimodal Humanist” – this term, though still a mouthful, resonated more with me

[SLIDE 5] Scrivener on when production is research

[SLIDE 6] Question about Feedback & Evaluation – not simply so I could assign a grade, but so we could provide meaningful feedback

  • The work – particularly its technical dimensions – was sometimes outside my area of expertise
  • How to balance weighting of form and content – “rigor” in concept or execution?
  • Individual vs. Group Accountability

[SLIDE 7] Revisited the list of criteria two years later

[SLIDES 8-10] Fall 2010 / 2011 / 2012 : Urban Media Archaeology

  • [SLIDE 11] Semester Schedule – discuss the theories represented in each unit
  • [SLIDE 12] PROJECT PROPOSALS – not unlike the trendy “contracts”
    • Justify choice of “genre” and format – use of media tools as method
  • [SLIDES 13-14] Student Proposed Projects
    • Carrier pigeons, the electrification of lower Manhattan, video game arcades, newspaper company headquarters, “media actors” in Atlantic Yards (via actor-network theory), etc.
    • I provide individual feedback; students post to blogs and classmates comment
    • This semester’s students haven’t yet posted their proposals online
  • [SLIDE 15] Learn Data Modeling (the interface now looks a bit different)
  • [SLIDE 16] User Scenarios
  • [SLIDE 17] Look inside Black Box – Software Development
  • [SLIDE 18] Pecha Kucha
    • DH projects inherently collaborative – need experts from multiple fields
  • [SLIDE 19] All the while, we’re collectively developing criteria for evaluation:
    • [SLIDE 20] By working in small groups and as a class to evaluate other “multimodal projects,” including HyperCities
    • [SLIDE 21] Through individual map critiques
    • Through peer review of one another’s projects
  • [SLIDE 22] Process Blogs – Self-Evaluation
    • Make their process public
      • [SLIDE 23] Discuss work w/ other public/cultural institutions – e.g., archives
    • [SLIDES 24-26] Practice “critical self-consciousness” – about their work processes, choice of methods, media formats, etc.
    • Hold themselves accountable for their choices
  • [SLIDE 27] Peer Evaluation: Paper Prototypes
  • Final Presentation: [SLIDE 28] My Feedback + [SLIDE 29] Students’ Peer Reviews

[SLIDE 30] Where was theory throughout?

  • Theory underlay the entire project: it informed their understanding of the way cities work, of how maps work as media, and of how they design their data models, which are in turn shaped by how they want their projects to look for users – thus, theories about the visualization of data mix in with their theories about how databases work
  • And in order for students to know how we were going to evaluate success, these theories had to be made an integral part of our development process

[SLIDE 31] Through critique, we’ll reverse-engineer student and professional projects and find the theories that informed them

  • [SLIDE 32] Several of my evaluative criteria – Concept + Content; Concept-/Content-Driven Design + Technique; Documentation and Transparent, Collaborative Development; Academic Integrity and Openness; Review and Critique – are backed by theories: theories central to the project’s content, theories of design, theories of knowledge production, theories of labor, etc.
  • [CLICK] But we’ll focus on the few dimensions that are overtly theoretical, and that we can potentially discern in a quick review, in the short time we have here
  • [SLIDE 33] Break up into groups, assess the Concept + Content and Concept-/Content-Driven Design + Technique of a few sample DH projects, and reverse-engineer the theories that might have informed their creation

[SLIDE 34] Case Studies:

  • These are the cases we choose from in my UMA class.
  • Solicit ideas for classes of projects to critique (e.g., data visualizations, map-based projects, crowdsourced archival projects, interactive publications)
  • Solicit ideas for specific projects that groups can collaboratively assess
