Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Date: Monday, 20/May/2024
9:00am - 10:30am: AudiAnnotate workshop
Trent J. Wintermeier
Samantha J. Turner
10:30am - 10:45am: Short break
10:45am - 12:15pm: MEI Basic workshop part 1
Maristella Feustle
 
ID: 110 / WS2/1: 1
Workshop
Keywords: pedagogy, basics, beginners

MEI Basics

M. Feustle

University of North Texas, United States of America

This workshop will proceed similarly to those of the previous three years, and is intended to give absolute beginners extended explanations of and practice with XML basics and rudimentary MEI encoding. The content will include, but not be limited to, that of the MEI online tutorials, in order to maximize comprehension and retention through repetition and the gradual addition of new information in successive exercises. The workshop allows for participation by those who are interested in MEI but cannot travel to Denton at this time, and it offers both a smoother transition for users into the content of other conference workshops and a recruiting tool for prospective users who don’t know where to begin. The length of the workshop is flexible, but can be as much as a half day. This presentation can be given in an in-person or hybrid format.
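The kind of rudimentary MEI encoding such a beginners' session builds toward can be sketched in miniature. The fragment below is an illustrative, hypothetical example (it is not taken from the workshop materials or the MEI tutorials), using only standard MEI element names, read back with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal MEI-flavoured fragment of the kind an absolute-beginner
# exercise might use: one staff, one measure, two quarter notes.
MEI_NS = "http://www.music-encoding.org/ns/mei"
fragment = f"""
<music xmlns="{MEI_NS}">
  <body>
    <mdiv>
      <score>
        <section>
          <measure n="1">
            <staff n="1">
              <layer n="1">
                <note pname="c" oct="4" dur="4"/>
                <note pname="e" oct="4" dur="4"/>
              </layer>
            </staff>
          </measure>
        </section>
      </score>
    </mdiv>
  </body>
</music>
"""

root = ET.fromstring(fragment)
# ElementTree lookups need the namespace spelled out in Clark notation.
notes = root.findall(f".//{{{MEI_NS}}}note")
pitches = [(n.get("pname"), n.get("oct")) for n in notes]
print(pitches)  # [('c', '4'), ('e', '4')]
```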

 
12:15pm - 1:00pm: Lunch
1:00pm - 3:00pm: MEI Basic part 2
3:00pm - 3:15pm: Short break
3:15pm - 4:00pm: Tour "Soundbox" music makerspace
4:10pm - 5:00pm: Music Special Collection showcase
6:00pm - 10:00pm: Opening Reception
Date: Tuesday, 21/May/2024
9:00am - 9:15am: Welcome
Session Chair: Maristella Feustle, University of North Texas
9:15am - 10:30am: Opening Keynote: Dr. Imani Mosley
Session Chair: Anna E Kijas, Tufts University
Dr. Imani Mosley
10:30am - 10:45am: Short break
10:45am - 11:45am: Paper Session #1
Session Chair: Johannes Kepper, Paderborn University
 
ID: 101 / PS1: 1
Long Paper
Keywords: early music, Notre Dame, musical reuse, machine learning, corpora

The “Clausula Archive of the Notre Dame Repertory”: End-to-end OMR, encoding, and analysis of medieval polyphony

J. Stutter

University of Sheffield, United Kingdom

Within the repertory of thirteenth-century polyphonic music (commonly known as “Notre Dame polyphony”), one of the key issues that most eludes contemporary musicological study is that of musical reuse. This is most frequently viewed through the process of “clausula substitution”, i.e. where sections of polyphony (clausulae) are replaced by alternatives, and clausulae are troped with new texts to form motets. Parallel to this, however, is a complex and largely unstudied network of more subtle interrelationships between settings of polyphony. This network extends beyond simple verbatim borrowing and raises difficult questions around what it means for music to be “similar” or “different”. In addition, the ambiguous and divergent notation present within the extant manuscript sources resists typical editorial practices that are contingent on chronology and an exact distinction between “composer” and “performer”. Instead of being rhythmically fully specified, Notre Dame notation often transmits only mild indications of latent rhythm and voice alignments, which differ wildly between sources. As such, scholarly opinion is undecided on even such fundamental issues as rhythm and alignment.

This paper presents and demonstrates the features of the “Clausula Archive of the Notre Dame Repertory” (CANDR), an online and open-source database for thirteenth-century polyphony, augmented by an optical music recognition (OMR) and editing tool. Alongside this is a Python analysis toolkit specifically designed to study the problem of musical reuse in medieval polyphony within the limits of its ambiguous notation. CANDR provides a single graphical web interface to browse, search, and edit the sources of Notre Dame polyphony (in both facsimile and symbolic notation) by overlaying notational traces from OMR directly onto facsimile images. The database currently contains full manuscript tracings and symbolic representations of nearly 1,000 settings of polyphony. This symbolic notation can be exported directly from CANDR by way of a novel MEI customisation format that encodes the music as it lies in the manuscript sources rather than attempting rhythmic transcription and, importantly, respects the ambiguity and flexibility of pre-mensural medieval notation. Finally, this paper presents the first results from the novel Big Data corpus analysis, which highlights possible connections between settings previously thought to be unrelated.



ID: 100 / PS1: 2
Short Paper
Keywords: early music, Aquitanian neumes, Square notation, encoding, search and analysis

Encoding and Analysis of Early Music: Aquitanian and Square Music Scripts

E. De Luca, M. E. Thomae, V. Urones Sánchez

NOVA University of Lisbon, Portugal

In this paper, we will present an anonymous project focused on the automatic analysis of plainchant found in Portuguese manuscripts from the 12th to the 17th centuries. The chants to be analyzed are written in two music scripts: Aquitanian neumatic and square notation. This project aims to create a new experimental tool, a prototype interface that compares the same chant, or portion of chant, across the whole repertory of music sources from the selected geographical area. One of the main challenges of this project is to encode and retrieve musical information originally written in two styles of notation. While the musical repertory remained quite stable across the centuries, Aquitanian neumatic notation provided only a limited degree of pitch information. The later square notation, on the other hand, was much more exact in terms of pitch but lost information on specific nuances of vocal delivery (conveyed by the various forms of the earlier neumes). Many of the older sources, those from the 12th to 14th centuries, are in poor material condition (with wrinkles, holes, humidity damage, dirt, etc.) and in a fragmentary state. Fragments from medieval codices of plainchant usually bear little music and display a wide variety of graphical features (decoration, scripts, size of the fonts, etc.). These two aspects make these sources difficult for optical music recognition (OMR) tools to recognize. We will present the encoding process followed to produce a sample of 100 encoded chants for analysis in our prototype interface.



ID: 112 / PS1: 3
Short Paper
Keywords: hymnody, music sources, data models, incipits, digital indices

Indexing Hymnody: Comparative Analysis and Opportunities for Interoperability

J. P. Karlsberg

Emory University, United States of America

This paper explores possibilities for greater interoperability among digital hymnody indices and between hymnody indices and digital libraries of musical sources. These findings stem from a comparative analysis of data models for leading hymnody indices and a higher-level comparison of hymnody indices with related resources and collections of digitized musical sources. I undertook this research in developing a plan to index the contents of the forthcoming Sounding Spirit Digital Library (SSDL), a thematic digital collection of 1,300 vernacular sacred music books from the southeastern United States published between 1850 and 1925. This analysis examines the three most extensive projects indexing hymnody in North America, the Hymn Tune Index (HTI), Southern and Western American Sacred Music and Influential Sources (SWASMIS), and Hymnary.org; the Répertoire International des Sources Musicales (RISM); the digital thematic collections American Vernacular Music Manuscripts (AVMM) and SSDL; and large-scale aggregators of digitized works like HathiTrust and the Internet Archive (IA). The analysis illustrates a common interest across hymnody indexing projects in documenting hymn tunes, hymn texts, the people responsible for these creations, and the sources in which hymns appear, and it documents the overlapping yet distinct approaches these projects take to encoding musical information and other data pertaining to these categories. The analysis also documents the lack of connectivity among hymnody indices and the uneven approaches toward connecting indexed hymnody to available digitized facsimiles of corresponding musical sources. This project includes a crosswalk detailing metadata fields shared among extant hymnody indices and a set of recommendations for bringing data from these indices and related resources together to facilitate discovery and research.

 
11:45am - 1:00pm: Lunch
1:00pm - 2:30pm: Paper Session #2
Session Chair: Joshua Neumann, Akademie der Wissenschaften und der Literatur Mainz
 
ID: 108 / PS2: 1
Long Paper
Keywords: MEI-Basic, MuseScore, music notation editor, data interchange

MEI-Basic support in MuseScore 4.2

L. Pugin1, J. Hentschel2, I. Rammos2, A. Hankinson1, M. Rohrmeier2

1RISM Digital Center, Switzerland; 2Digital and Cognitive Musicology Lab, EPFL, Switzerland

The implementation of MEI support in MuseScore 4.2 marks a significant milestone in bridging the gap between the Music Encoding Initiative (MEI) and a widely used music notation application. While MEI creation tools exist, direct export from widely used notation software such as MuseScore has been lacking. This paper details the collaborative effort between the Digital and Cognitive Musicology Lab (DCML) at EPFL and the RISM Digital Center to integrate MEI support into MuseScore.

The implementation focuses on round-trip lossless conversion, enabling users to export and import MEI files seamlessly and facilitating data reuse and versatile use-case scenarios. It is based on MEI-Basic, a subset of the MEI specification, which ensured a well-defined starting point and coverage of the most essential features of Common Western music notation. While some limitations exist due to application design and differences in data structures, the implementation offers a foundation for further improvements.

However, challenges persist, such as metadata limitations and handling of IDs, which generate further tasks for future refinement. The collaborative effort acknowledges the ongoing role of the MEI community in providing feedback and insights to shape recommended workflows and best practices. The experiences gained by users employing MuseScore with MEI support will contribute to the continuous improvement and adjustment of this integration, fostering a more robust MEI ecosystem.



ID: 102 / PS2: 2
Short Paper
Keywords: MEI, parse, search, elastic, index

Simplifying MEI for Searching Applications

J. I. Stevenson, D. Day

Brigham Young University, United States of America

Utilizing the Elasticsearch library and the Java language, researchers are efficiently and robustly parsing MEI in a fresh and flexible way, vastly improving on the previous working solution presented at the 2023 MEC by David Day from Brigham Young University. Decreased indexing time, far more options for searching, improved accuracy, and the elimination of constant re-indexing are among the improvements made. Reducing MEI to a skeletal frame for rapid and variable searching has exciting implications for the community at large, and will become a crucial touchstone for those working on similar projects in the future.
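The abstract does not specify the index schema, but the idea of reducing MEI to a "skeletal frame" can be sketched as follows. This is a hypothetical reduction under assumed field names, not the schema used in the BYU project: each note is flattened into a small record that a search engine such as Elasticsearch could ingest.

```python
import xml.etree.ElementTree as ET

MEI_NS = "{http://www.music-encoding.org/ns/mei}"

def skeletal_records(mei_xml: str):
    """Flatten an MEI string into minimal per-note records for indexing.

    A hypothetical reduction: it keeps only measure number, pitch name,
    octave, and duration per note, discarding everything else.
    """
    root = ET.fromstring(mei_xml)
    records = []
    for measure in root.iter(MEI_NS + "measure"):
        for note in measure.iter(MEI_NS + "note"):
            records.append({
                "measure": measure.get("n"),
                "pname": note.get("pname"),
                "oct": note.get("oct"),
                "dur": note.get("dur"),
            })
    return records

# A toy fragment (not a complete, schema-valid MEI document):
sample = """
<music xmlns="http://www.music-encoding.org/ns/mei">
  <measure n="1">
    <note pname="g" oct="4" dur="8"/>
    <note pname="a" oct="4" dur="8"/>
  </measure>
</music>
"""
print(skeletal_records(sample))
```

Records of this shape can be bulk-posted to a search index, so a query never has to traverse the full MEI tree again.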



ID: 114 / PS2: 3
Short Paper
Keywords: FRBR, MEI, Digital Liszt Catalogue Raisonné, work source relations

Issues of FRBR systematics in indexing sources that represent different versions of works. The example of Franz Liszt

M. Richter1, B. Voigt2

1SLUB Dresden; 2University of Heidelberg

The DFG project Digital Liszt Catalogue Raisonné, launched in 2020, catalogues and systematizes Franz Liszt's oeuvre. To this end, philological and bibliographic information on Liszt-related sources and works will be collected, encoded, and stored in a suitable data format as the basis for the index. MEI, an XML-based format that is human- and machine-readable and equally suitable for long-term archiving and data export via API, will be used for this purpose. The catalogue raisonné to be developed will provide a selection of possible views of this data in a web interface. When indexing the source holdings, problems arise in the FRBR categorization that forms the basis of MEI. The poster presents a problematic source in this respect and offers a preliminary encoding in MEI for discussion.

The Liszt-related source situation is characterized by the fact that, in addition to finished versions, early or transitory states of the works are also well documented by a rich stock of sources. However, Liszt tended to use the same source materials for multiple revisions of his works, often with years or decades between the individual revisions. In such cases, the various stages of editing the sources each form their own versions of a work, and occasionally a new work results from the editing of a source. In one such case, Alexander Wilhelm Gottschalg's copy of a passage newly composed by Liszt is firmly attached to the print. A later, also published version was created by making further deletions and additions, removing Gottschalg's copy and adding autograph leaves. The state of preservation described above makes intuitive recording according to the FRBR model difficult, firstly because of the assignment of the source object to different versions, and secondly because of the materially complex composition of the source, which consists of printed and manuscript leaves.



ID: 111 / PS2: 4
Short Paper
Keywords: digital music edition, data model, software development, template research

Works of other authors and composers as templates – approaches to capture template research in digital music editions. A case study in the context of the Reger-Werkausgabe.

N. Beer1, A. Nguyen2

1University of Paderborn, Germany; 2Max-Reger-Institute, Karlsruhe/Germany

The hybrid music edition project Reger-Werkausgabe (RWA) has developed new research and publication methods since its beginning in 2008. With its online platform “RWA Online”, it has established a sustainable foundation for complex digital research, such as the study of the “templates” that Reger used in different ways for his compositions. RWA concentrates on templates from literature and music in Reger’s Organ Works (module I), Songs and Chorale Works (module II), and Reger’s Editions of Works by Other Composers (module III).

Initially, RWA’s work on templates was designed to fit into the condensed space of a printed volume’s critical commentary. The unlimited space of the project’s digital publication soon led to new presentation ideas that could serve to answer research questions such as text comparison or to allow for more comprehensive research into text origins. RWA has compiled a thoroughly worked-out data set that makes different levels of relations between Reger’s works and their template works accessible.

RWA had to solve several (technical) problems. Besides the digitization of template source material, a data model had to be designed. In addition, software solutions that make these contents accessible and usable from varying research interests or entry points had to be drafted, evaluated, and developed.

For its data model, RWA uses standards such as TEI, MEI, and FRBR to document characteristics and relations. This data is then brought to life with the help of an eXist-db instance and four eXist-db web applications developed within the RWA that provide data access, text comparison, and object visualization.

This paper will give an overview of the data model and software developments in the RWA by presenting key problems and their respective solutions from its edition work. It will furthermore offer insights into extensions that evolve from the work on the third project module.

 
3:00pm - 5:30pm: Workshop: Metadata Editing in MEI
Kristina Richts-Matthaei (Academy of Sciences and Literature, Mainz)
Matthias Richter (Dresden State and University Library, Dresden)
Severin Kolb (Dresden State and University Library, Dresden)
Clemens Gubsch (Austrian Academy of Sciences, Vienna)
Date: Wednesday, 22/May/2024
9:15am - 10:45am: Paper Session #3
Session Chair: Jesse P. Karlsberg, Emory University
 
ID: 105 / PS3: 1
Long Paper
Keywords: MEI, documentation, customization, extension

Beyond the Standard Model

J. Kepper, L. Rosendahl, R. Sänger

Beethovens Werkstatt, Germany

Undoubtedly, MEI is one of the most, if not the most versatile data model for encoding music notation, covering an increasing number of notational systems for various epochs and regions, and different scholarly interests and research questions on such music. In chapter 1.3.5 of the MEI Guidelines, it is recommended that “in production, it is best to use a customized version of MEI, restricted to the very needs of a project.” However, there is no documentation so far on how to properly enrich the MEI model in those cases where the current standard does not adequately cover a given research interest.

Obviously, the first step would be to reach out to the MEI Community and to ensure that there are no conceptually adequate, but insufficiently apparent solutions available. Such cases are not unlikely if a solution would depend on features of MEI that have not been widely explored. However, it is possible and legitimate for a research project to have specific needs that are beyond the scope of the current MEI framework. A reason for that might be the exploration of a repertoire or feature currently unsupported by MEI. In that case, it is probably best to continue the discussion with the MEI Community and jointly seek to find a model that neatly fits into the existing MEI framework. Such an enhancement may require changes to current MEI, so this isn’t necessarily an addition only, and may result in breaking changes to other parts of the MEI framework. Finally, project requirements may contradict existing design choices – they are intentionally not covered, or addressed in ways which do not cover the needs of the example at hand. Reasons for that might be complications in other parts of MEI caused by the implementation of this model, or potential inconsistencies between these parts. In such cases, there is a need to find a data model which extends beyond MEI’s current scope without interfering with the standard MEI model.

Our presentation seeks to shed light on this critical gap in the documentation of MEI. We will discuss best practices for individual extensions to the standard, aiming to emphasize the inherent challenges faced by researchers and projects encountering modelling challenges that are not currently addressed by the MEI framework. Addressing this documentation gap is crucial for fostering innovation and accommodating diverse scholarly interests within the music encoding community.



ID: 103 / PS3: 2
Short Paper
Keywords: music encoding, data structures, Bach, counterpoint, Goldberg Variations

Modelling Multi-domain Voicing in the Goldberg Variations

S. J. Monnier, J. K. Tauber

None, United States of America

Efforts to encode music have long recognized the differences between how note information can be conceived of logically, how the notes are to be performed, and how they are to be conveyed in standard music notation. Rich encoding schemes such as MEI allow description in each of these domains—the logical, gestural, and visual—although not always elegantly in the same file. Contrapuntal keyboard music is a challenging genre for multi-domain encoding, particularly with regard to voice. There is not always a consistent mapping from staff and stem direction to contrapuntal voice, nor do staves necessarily indicate which hand is playing, especially if there is hand-crossing. Furthermore, occasional chordal textures can defy the assignment of individual notes to a particular voice.

Bach’s Goldberg Variations provide a rich inventory of examples of all these challenges. This paper will illustrate those challenges and potential solutions in the context of an early-stage project to build an interactive viewer/player for the Goldberg Variations that allows exploration of the work through different domain lenses. The specific contrapuntal structures of these variations allow for a detailed exploration of multi-domain encoding. We abstract away from the conventions of music notation to support a more detailed exploration of the analytical domain, which can be rendered in a variety of different ways. As well as the encoding and web application itself, the paper will also discuss Python code used to convert and map between the different domains of encoding.

The power of working across the logical, gestural, visual, and analytical domains stretches what we are able to do but also brings up fascinating issues for how to encode music in a machine-readable and human-readable way to maximize how much information can be presented and analyzed.
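The tension between domains described above can be sketched in a few lines. The note data here is invented for illustration, not drawn from the project's encoding of the Goldberg Variations: the point is only that visual staff, logical voice, and performing hand partition the same notes differently.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: str
    staff: int   # visual domain: which staff the note is printed on
    voice: int   # logical domain: which contrapuntal voice it belongs to
    hand: str    # gestural domain: which hand plays it

# A hypothetical hand-crossing bar: voice 1 dips onto the lower staff and
# into the left hand, so no single attribute can stand in for the others.
bar = [
    Note("g4", staff=1, voice=1, hand="right"),
    Note("d4", staff=2, voice=1, hand="left"),
    Note("b3", staff=2, voice=2, hand="left"),
]

# Grouping by each domain yields different partitions of the same notes.
by_voice = {v: [n.pitch for n in bar if n.voice == v]
            for v in sorted({n.voice for n in bar})}
by_staff = {s: [n.pitch for n in bar if n.staff == s]
            for s in sorted({n.staff for n in bar})}
print(by_voice)  # {1: ['g4', 'd4'], 2: ['b3']}
print(by_staff)  # {1: ['g4'], 2: ['d4', 'b3']}
```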



ID: 109 / PS3: 3
Long Paper
Keywords: data modelling, performance, audio-visual media, metadata, MEI

Modelling Performance – Conceptual Realities vs. Practical Limitations in MEI

J. Neumann, K. Richts-Matthaei

Academy of Sciences and Literature | Mainz, Germany

MEI’s development to date has understandably focused on the digitization of visually based musical documents. It has thus paralleled historical musicology’s long-standing (since the 19th century) conception of musical scores as musical works. The past quarter-century in particular has seen an expansion of musicological research to include performance studies based on the use of audio-visual recordings as primary source documents. Not least in the context of current considerations to expand the format’s encoding options to include audiovisual elements, the limits of its current structure become apparent. The question therefore arises whether the existing MEI structures are sufficient to respond to this performance-oriented turn, or whether they should be questioned and adapted as necessary in order to better align the data format with current developments and considerations in musicological research and neighbouring disciplines.

 
10:45am - 11:00am: Short break
11:00am - 12:00pm: Performance and Media IG Meeting
Session Chair: Joshua Neumann, Akademie der Wissenschaften und der Literatur Mainz
12:00pm - 1:00pm: Lunch
1:00pm - 2:30pm: Paper Session #4
Session Chair: Maristella Feustle, University of North Texas
 
ID: 107 / PS4: 1
Long Paper
Keywords: music encoding, data flow network, relational theory

Critical Semantic Properties of Music Notation Datasets

M. Lepper1, B. Trancón y Widemann1,2

1semantics gGmbH, Germany; 2Hochschule Brandenburg

The semantics of notation systems can naturally be meta-modelled as a network of transformations, starting with the syntactic elements of the notation and ending with the parameters of an execution. In this context, a digital encoding format for music notation can be seen as selecting a subset of the data nodes of this network for storage, leaving others to evaluation. For such a selection, semantic properties are defined which have an impact on the practical costs of maintenance, migration, extension, etc.



ID: 113 / PS4: 2
Short Paper
Keywords: encoding, gagaku, Japanese Traditional Music

GagakuXML: Markup for Hichiriki Scores and Enhancing Efficiency in Traditional Japanese Music Encoding

S. Seki

The University of Tokyo, Japan

This paper examines methods for encoding musical scores used for the Hichiriki instrument in gagaku, traditional Japanese court music. Gagaku is Japan's oldest musical tradition, but its scores use ambiguous notation, posing challenges for computational analysis. The proposed encoding method centers on shōga (sung mnemonic lyrics), which appear most frequently. The elements of tetsuke (fingering) and hyōshi (rhythmic cycles) accompany shōga in the markup. An efficient workflow uses Python for initial markup and a Stream Deck device for rapid tagging while reviewing images of the original scores. Encoded data created for 93 short and medium-length pieces provides a structured foundation for the computational study of gagaku. Future work will expand the encoding to parts for other wind instruments such as the shō and ryūteki.
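The "Python for initial markup" step can be sketched as below. The element and attribute names (`phrase`, `shoga`, `tetsuke`) are illustrative assumptions, not GagakuXML's actual schema: the sketch only shows how sung syllables and their fingerings might be paired into markup programmatically.

```python
import xml.etree.ElementTree as ET

def tag_shoga(syllables, fingerings):
    """Wrap shōga syllables and tetsuke fingerings in a small XML tree.

    Hypothetical element/attribute names, for illustration only.
    """
    root = ET.Element("phrase")
    for syllable, fingering in zip(syllables, fingerings):
        # One element per sung syllable, fingering carried as an attribute.
        unit = ET.SubElement(root, "shoga", tetsuke=fingering)
        unit.text = syllable
    return ET.tostring(root, encoding="unicode")

xml_out = tag_shoga(["chi", "ra"], ["1", "2"])
print(xml_out)
```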



ID: 104 / PS4: 3
Short Paper
Keywords: Wikidata, musical instruments, language-agnostic, machine-readable framework, collaborative knowledge base

Musical Instrument Encoding Matters

K. E. Bouressa, I. Fujinaga

CIRMMT, McGill University, Canada

This paper explores Wikidata’s transformative potential as a language-agnostic tool for categorizing musical instruments. Traditional taxonomy systems like Hornbostel-Sachs (HBS) or the Library of Congress (LOC) face challenges due to language-specific constraints and a lack of provisions for local terminology. Wikidata, a collaborative knowledge base, serves as a solution, allowing structured data storage and linking across diverse topics. It employs a hierarchical structure with unique identifiers (Q-ids and P-ids) for items and properties, organizing them within a conceptual framework.

Despite challenges arising from Wikidata’s open-source nature, its machine-readable format and multilingual support provide a robust solution. It addresses inconsistencies, such as incomplete musical instrument holdings, by offering a more inclusive musical knowledge ecosystem. Users can explore, contribute, and query information in their preferred languages, fostering a diverse landscape.

Wikidata also acts as a tool for archiving existing databases, notably used by museums like the Metropolitan Museum of Art and the British Museum. Overcoming linguistic isolation, it links unique items to identifiers, facilitating queries for historical instruments. This is especially crucial for identifying linguistically isolated historical instruments documented in encyclopedias or articles.

Contrasting with traditional hierarchical systems, Wikidata’s item-specific approach offers greater flexibility and access. The paper explores Wikidata’s collaborative, multilingual, and machine-readable framework, highlighting its role in projects like LinkedMusic’s Virtual Instrument Museum (VIM), which utilizes Wikidata to create a comprehensive virtual database of musical instrument names, expanding hierarchies and terminology. The synergy between Wikidata and VIM enriches musical knowledge dissemination, engaging specialists and nonspecialists alike.

In conclusion, the paper underscores Wikidata’s potential impact on musical instrument encoding, promoting linguistic diversity through machine-readable, language-agnostic identifiers. Collaborative projects, exemplified by VIM, demonstrate Wikidata's ability to bridge language gaps, contributing to a globally enriched understanding of diverse musical instruments.



ID: 106 / PS4: 4
Short Paper
Keywords: Jazz notation, OMR, encoding standardization, MIR.

Towards a standardization of lead sheet encoding: an experience in OMR.

P. García-Iasci1,2, J. C. Martínez Sevilla1, D. Rizo1,3, J. Calvo-Zaragoza1

1University of Salamanca, Spain; 2University of Alicante, Spain; 3ISEA.CV

Music encoding is an extensive field of research, and it therefore needs a unification of encoding rules according to the music it deals with.

The proposal advocates the use of Optical Music Recognition (OMR) for the automatic encoding of jazz lead sheets. To this end, it will be necessary to explore the musicological complexities of jazz, identifying challenges associated with its improvisational nature, handwritten notation, notational ambiguities, and harmonic complexities. These musical and notational challenges are addressed by adapting OMR technology through the creation of a diverse dataset of handwritten sources, enriching the information available to train Artificial Intelligence (AI)-based Music Information Retrieval models.

The main goal is to achieve efficient automatic encoding, contributing to the preservation and dissemination of jazz musical heritage.

 
2:30pm - 2:45pm: Short break
2:45pm - 4:00pm: Closing Keynote: Dr. Michael S. A. Cuthbert "Encode Fast, Decode Often"
Session Chair: Joshua Neumann, Akademie der Wissenschaften und der Literatur Mainz
Dr. Michael S. A. Cuthbert
Date: Thursday, 23/May/2024
9:15am - 12:00pm: Community Meeting
Session Chair: Anna E Kijas, Tufts University
1:00pm - 3:00pm: Meetings
IG meetings: Digital Pedagogy/MerMEId/Tutorials

 
Conference: MEC 2024