Encoding Music and Text


Event type: Workshop

 

To reserve a place on this workshop, please email Pip Willcox at pip.willcox@bodleian.ox.ac.uk.

We are delighted to be joined by Raffaele Viglianti from Maryland Institute for Technology in the Humanities (MITH), adding his considerable expertise in the field to our local knowledge. If you're interested in music and the digital, this is not to be missed!

Creating digital editions of text and of music is well understood. Two established XML-based standards in common use are the Text Encoding Initiative (TEI) and the Music Encoding Initiative (MEI). Each name refers both to the encoding standard itself and to the governing community that creates and uses it.

The TEI was founded in 1987 and is a mature, still-developing standard with a large and lively international community. The MEI, founded in 1999, was inspired by the TEI and is likewise the focus of a growing international community.

While the two encoding initiatives are not formally related, they share many common characteristics and development practices. A TEI-encoded text can be embedded in an MEI-encoded document and vice versa. There has been work in this field already, including through the TEI Music Special Interest Group, and a repository of associated files is available online.
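As a rough illustration of what such embedding can look like, here is a minimal sketch of a TEI document carrying a short MEI-encoded incipit inside TEI's <notatedMusic> element. All of the content (titles, notes, metre) is invented, and the inline MEI assumes a TEI ODD customization that permits MEI-namespace elements at this point; an unmodified TEI schema may instead expect a pointer to an external MEI file.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: content is invented, and the inline MEI
     assumes a TEI customization allowing foreign-namespace elements
     inside <notatedMusic>. -->
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>Song text with an embedded musical incipit</title>
      </titleStmt>
      <publicationStmt><p>Unpublished sketch.</p></publicationStmt>
      <sourceDesc><p>Born digital.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <p>The opening phrase is notated as follows:</p>
      <notatedMusic>
        <desc>Incipit, one measure, 4/4</desc>
        <!-- The embedded notation switches to the MEI namespace here -->
        <music xmlns="http://www.music-encoding.org/ns/mei">
          <body>
            <mdiv>
              <score>
                <scoreDef meter.count="4" meter.unit="4">
                  <staffGrp>
                    <staffDef n="1" lines="5" clef.shape="G" clef.line="2"/>
                  </staffGrp>
                </scoreDef>
                <section>
                  <measure n="1">
                    <staff n="1">
                      <layer n="1">
                        <note pname="c" oct="4" dur="4"/>
                        <note pname="d" oct="4" dur="4"/>
                        <note pname="e" oct="4" dur="2"/>
                      </layer>
                    </staff>
                  </measure>
                </section>
              </score>
            </mdiv>
          </body>
        </music>
      </notatedMusic>
    </body>
  </text>
</TEI>

Going the other way, TEI-namespace markup embedded within an MEI document relies on the same namespace-switching mechanism.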

This workshop will explore how well this is working, and where there is room for improvement in documentation or in coding. Discussions will focus on case studies, with the aim of drawing generalizable conclusions.

OBJECTIVES

This workshop will bring together colleagues with a range of subject and disciplinary interests, both to take stock of current knowledge of the music- and text-encoding landscapes and to identify areas for potential development. Our discussions will grow from questions such as:

•  What are our preferred methodologies and tools for encoding music and text in one document?
•  Is greater interoperability desirable?
•  What features of MEI and TEI do not currently interoperate happily?
•  How do we deal with competing hierarchies? (A small illustrative fragment follows this list.)
•  What tools and technologies are currently in use to work with, interrogate, and present music- and text-encoded documents?
•  How can we move seamlessly between music and text views, depending on a particular encoder’s or reader’s interests?
•  How might these tools be developed further to improve our understanding and facilitate our use of these documents?
•  What new tools do we need?
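On the competing-hierarchies question, the following hypothetical TEI fragment sketches the usual shape of the problem: a verse line in the text hierarchy crosses a bar line in the music hierarchy, so one hierarchy is kept as nested elements while the other is recorded only as empty milestone markers. The lyric and measure numbers are invented.

<!-- Hypothetical fragment: verse lines form the primary (nested) hierarchy;
     the competing measure structure is reduced to empty <milestone/> markers
     so the XML remains well formed. -->
<lg xmlns="http://www.tei-c.org/ns/1.0">
  <l>Row, row, row <milestone unit="measure" n="2"/>your boat,</l>
  <l>gently down the <milestone unit="measure" n="3"/>stream.</l>
</lg>

Which hierarchy deserves to be primary, and whether milestones, fragmentation, or stand-off pointers serve a given edition better, is among the questions the workshop will discuss.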

 

This workshop is organized by:

•  Xavier Bach, Queen’s College, University of Oxford
•  James Cummings, IT Services, University of Oxford
•  Andrew Hankinson, Faculty of Music/Oxford e-Research Centre, University of Oxford
•  Raffaele Viglianti, Maryland Institute for Technology in the Humanities
•  Pip Willcox, Bodleian Libraries/Oxford e-Research Centre, University of Oxford

 

Event Link: http://blogs.bodleian.ox.ac.uk/digital/2016/12/20/digital-methods-encoding-music...