This series of virtual one-hour webinars is designed to cover selected introductory topics in digital scholarship. The programs will provide attendees with professional development skills for conducting research in the digital age and supply introductory training in areas that may not be covered in regular instruction and curricula. Topics include the basics of digitization, text mining, copyright and data visualization. These workshops will also include time for discussion and questions. All are welcome to attend!
Browse the Spring 2024 Digital Scholarship Series Collections
Data Visualization
Are you a university student, faculty or staff member who is eager to harness the transformative potential of data visualization in your research endeavors? This session will introduce you to how data visualization can elevate your research. Explore a rich landscape of tools and services tailored to the academic community and ensure that you have the right resources at your fingertips. Whether you're a student embarking on a research project, a faculty member seeking to engage students or a staff member looking to improve your data-driven decision-making, this workshop is designed to address your specific needs.
Research Metrics: Uses and Limitations
In this session, you will learn the meanings of different types of research metrics: journal-level (such as the Journal Impact Factor and the Scimago Journal Rank), author-level (such as the h-index) and article-level (such as citation counts) metrics. Altmetrics, alternatives to traditional citation-based metrics, will also be discussed. You will also learn about the limitations of research metrics and how to use them responsibly.
Rights, Resharing and Your Research: Navigating the World of Intellectual Property
Participants will learn about basic U.S. intellectual property laws, including copyright, patents and trademarks, and how they apply to your research, data, creations and inventions. Issues and options surrounding sharing of research results and data sets will be discussed, including the status of the 2022 OSTP memo on free, immediate and equitable access to federally funded research.
Digital Scholarship Showcase
Jessica Barness
Using AI to Expand the Voices of Design Debates
In my project, The Designers Respond, I explore the debates that happen within the reader comments of graphic design blogs. However, it's problematic to publish those comments in their original form. This challenge led me to fictionalize conversations that resemble the themes and voices found online, using a collaboration between my brain and AI (ChatGPT). In my presentation, I will discuss my approaches and reflections about training AI to generate historical debates analogous to those in the blogs.
Rebecca Catto
‘America's culture wars make losers of us all:’ Methodological Challenges in Content Analysis of Online National Newspaper Coverage of Gender, Sexuality, Science and Religion
In this short presentation, I reflect upon the methodological challenges of using databases for a content analysis of American national newspaper coverage of gender, sexuality, science and religion. These include technical issues of full-text access, export and import into the computer-aided qualitative data analysis software NVivo, which go hand in hand with issues of selection criteria and discernment. Decisions always need to be made about what to include and exclude in data collection. Working with digital data on areas of public controversy can expand the risks and possibilities.
Sean Petiya
A Linked Open Data Model for Comics Content
This presentation will review progress on a pilot study exploring the semantic enrichment of comics content (pages, panels, etc.) in an effort to better connect it to related linked open data resources, with a focus on graphic medicine (comics and graphic novels about healthcare). One of the challenges in describing these stories, but also a virtue of the genre, is capturing the various perspectives present in the story: patient, caregiver and provider. The primary goal of this project and its underlying data model is to provide the structure for meaningfully and effectively describing this content from multiple perspectives, enhancing its metadata description, discoverability and potential to be remixed and reused in other applications.
Wesley Raabe
Using Digital Tools to Edit a 200,000-Word Text in Five Versions: What to Do?
I propose nine distinct steps in the process:
1) Transcribe each version of the text twice.
2) Make each pair of transcriptions match, with file-merging software.
3) Regularize the transcriptions with regular expressions, to remove chaff that could gum up collation, according to a rational editorial policy, in draft form.
4) Convert the regularized transcriptions to XML.
5) Align and number each sentence in each XML file, with Python and regular expressions.
6) Collate the separate XML files with Python and CollateX.
7) Encode each variant, visible in the CollateX output, in markup for the RELEDMAC LaTeX package.
8) Reconsider each decision made previously, by rediscovering source document conventions that raise concerns about the draft editorial policy (from step 3), if regularization has misled.
9) Repeat step 8 until what the edition policy says about the treatment of textual variation, and what the edition does, are in near-perfect accord, within the limits of human patience.
The purpose of my presentation is to illustrate, very briefly, each step in the above process, which has been ongoing for almost 17 years.
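As a rough, hypothetical illustration of the kind of alignment step 6 performs, two witness texts can be compared sentence by sentence with Python's standard-library difflib. This is only a simplified stand-in for CollateX, which aligns many witnesses at once and handles finer-grained variation; the function name and sample sentences below are invented for the sketch.

```python
import difflib

def collate_witnesses(base, witness):
    """Align two sentence lists and report where they diverge.

    A toy stand-in for CollateX-style collation: real collation
    supports multiple witnesses and token-level variants.
    """
    matcher = difflib.SequenceMatcher(a=base, b=witness)
    variants = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # 'equal' spans agree; anything else is a textual variant
        if tag != "equal":
            variants.append((tag, base[i1:i2], witness[j1:j2]))
    return variants

base = ["It was the best of times.", "It was the worst of times."]
witness = ["It was the best of times.", "It was the worse of times."]
print(collate_witnesses(base, witness))
```

Here the first sentence aligns in both witnesses and the second is reported as a replacement, which is the raw material an editor would then encode as a variant reading.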