Abstracts
Jessica Barness
Using AI to Expand the Voices of Design Debates
In my project, The Designers Respond, I explore the debates that unfold within the reader comments of graphic design blogs. However, it is problematic to publish those comments in their original form. This challenge led me to fictionalize conversations that resemble the themes and voices found online, using a collaboration between my brain and AI (ChatGPT). In my presentation, I will discuss my approaches to, and reflections on, training AI to generate historical debates analogous to those in the blogs.
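As a rough illustration of the kind of generation step involved, the sketch below uses the openai Python client to ask a chat model for a fictionalized comment-thread exchange; the model name, prompt wording, and persona labels are my own illustrative assumptions, not the project's actual workflow.

# Illustrative sketch only: prompt, model choice, and personas are assumptions,
# not the workflow used in The Designers Respond.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a short, fictionalized comment-thread debate between two graphic "
    "designers, 'Commenter A' and 'Commenter B', arguing about whether design "
    "criticism belongs in blog comments. Echo the tone of design-blog "
    "discussions without quoting or naming any real person."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.9,  # higher temperature for more varied voices
)

print(response.choices[0].message.content)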
Rebecca Catto
‘America's culture wars make losers of us all’: Methodological Challenges in Content Analysis of Online National Newspaper Coverage of Gender, Sexuality, Science and Religion
In this short presentation, I reflect upon the methodological challenges of using databases for a content analysis of American national newspaper coverage of gender, sexuality, science and religion. These include technical issues of full-text access, export, and import into the computer-aided qualitative data analysis software NVivo, which go hand in hand with issues of selection criteria and discernment. Decisions always need to be made about what to include and exclude in data collection. Working with digital data on areas of public controversy can expand both the risks and the possibilities of such analysis.
Sean Petiya
A Linked Open Data Model for Comics Content
This presentation will review progress on a pilot study exploring the semantic enrichment of comics content (pages, panels, etc.) in an effort to better connect it to related linked open data resources, with a focus on graphic medicine (comics and graphic novels about healthcare). One of the challenges in describing these stories, but also a virtue of the genre, is capturing the various perspectives present in the story: patient, caregiver, and provider. The primary goal of this project and its underlying data model is to provide the structure for meaningfully and effectively describing this content from multiple perspectives, enhancing its metadata description, discoverability, and potential to be remixed and reused in other applications.
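To make the idea of multi-perspective description concrete, here is a minimal sketch using Python's rdflib; the ex: namespace and its class and property names (Panel, depictsPerspective, and so on) are placeholders invented for illustration, not the project's actual vocabulary.

# Minimal sketch of multi-perspective panel description with rdflib.
# The ex: namespace and all its terms are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/comics#")

g = Graph()
g.bind("ex", EX)

panel = URIRef("http://example.org/comics/example-comic/page12/panel3")

g.add((panel, RDF.type, EX.Panel))
g.add((panel, EX.partOf, URIRef("http://example.org/comics/example-comic/page12")))
g.add((panel, EX.caption, Literal("Waiting room, second opinion")))

# The same panel can be described from several points of view,
# which is the core aim of the data model.
g.add((panel, EX.depictsPerspective, EX.PatientPerspective))
g.add((panel, EX.depictsPerspective, EX.CaregiverPerspective))
g.add((panel, EX.depictsPerspective, EX.ProviderPerspective))

print(g.serialize(format="turtle"))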
Wesley Raabe
Using Digital Tools to Edit a 200,000-Word Text in Five Versions: What to Do?
I propose nine distinct steps in the process:
1) Transcribe each version of the text twice.
2) Make each pair of transcriptions match, using file-merging software.
3) Regularize the transcriptions with regular expressions, to remove chaff that could gum up collation, according to a rational editorial policy in draft form.
4) Convert the regularized transcriptions to XML.
5) Align and number each sentence in the XML files, with Python and regular expressions.
6) Collate the separate XML files with Python and CollateX (sketched below).
7) Encode each variant, visible in the CollateX output, into markup for the reledmac LaTeX package.
8) Reconsider each decision made previously, by rediscovering source document conventions that raise concerns about the draft editorial policy (from step 3), if regularization has misled.
9) Repeat step 8 until what the edition policy says about the treatment of textual variation, and what the edition does, are in near-perfect accord, within the limits of human patience.
The purpose of my presentation is to illustrate, very briefly, each step in the above process, which has been ongoing for almost 17 years.
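As a rough sketch of what steps 3 and 6 can look like in practice, the snippet below regularizes two short witness transcriptions with Python's re module and aligns them with the collatex package; the sample texts and regularization rules are invented for illustration and are far simpler than the edition's actual policy.

# Illustrative sketch of regularization (step 3) and collation (step 6).
# Witness texts and regularization rules are invented examples.
import re
from collatex import Collation, collate

def regularize(text):
    """Apply simple draft-policy rules: straighten quotes, collapse whitespace."""
    text = text.replace("\u2019", "'")           # curly to straight apostrophe
    text = re.sub(r"\s+", " ", text)             # collapse runs of whitespace
    text = re.sub(r"\bto-day\b", "today", text)  # sample spelling regularization
    return text.strip()

witness_a = "She had not seen him since  to-day\u2019s meeting."
witness_b = "She had not seen him since today's meeting began."

collation = Collation()
collation.add_plain_witness("A", regularize(witness_a))
collation.add_plain_witness("B", regularize(witness_b))

# Print an alignment table showing agreement and variation between witnesses.
alignment_table = collate(collation)
print(alignment_table)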