I work for Sibelius so I'm heavily involved in this world. MusicXML is a great standard and offers a solid basis for data interchange between music notation programs. But now there's a new group working to build a successor standard, MNX: https://w3c.github.io/mnx/docs/
It was originally going to be in XML but they recently switched to JSON, which is a good move, I think. I can't wait for it to be adopted as it will give so much more richness to the data set.
I've written two music apps that use MusicXML as their native representation (https://woodshed.in is the newer one), so I've been involved in this world as well.
MusicXML is a great effort to tackle a very difficult problem, but some of the details can get rather hairy (e.g. having to represent many concepts twice, once for the visual aspect and once for the performance aspect; or how exactly to express incomplete slurs). Interoperability in practice seems to be fairly limited (possibly because, for many music programs, MusicXML import/export is an afterthought).
One of the biggest contributions a new standard could make is to provide as complete a test suite as possible of various musical concepts (and their corner cases!) and their canonical representation. It looks like MNX has already made good efforts in this direction.
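For instance, a canonical-representation test could normalize away purely visual details before comparing two encodings of the same concept. A rough sketch in Python — the `canonicalize` helper and the `LAYOUT_ONLY` list are made up for illustration, not part of any spec:

```python
import xml.etree.ElementTree as ET

# Hypothetical set of purely visual elements to strip before comparison.
LAYOUT_ONLY = {"print", "defaults", "appearance"}

def canonicalize(xml_str: str) -> str:
    """Drop layout-only elements and re-serialize, so that two encodings
    that differ only visually compare equal."""
    root = ET.fromstring(xml_str)
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag in LAYOUT_ONLY:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

# Two exports that differ only in a layout hint...
a = "<measure number='1'><print new-system='yes'/><note><duration>4</duration></note></measure>"
b = "<measure number='1'><note><duration>4</duration></note></measure>"
# ...canonicalize to the same form.
assert canonicalize(a) == canonicalize(b)
```

A real suite would of course pair each musical concept (and its corner cases) with a reference file rather than inline strings, but the normalize-then-compare shape is the same.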
MusicXML is old hat. All the cool kids are using MusicJSON now.
EDIT: I'd like to clarify that I posted this comment as a joke, before the comment below pointed out that there is, in fact, a JSON-based rewrite of the standard in progress.
I have not had much success using MusicXML to switch between different notation programs. Trying to read a score exported from MuseScore as MusicXML in Sibelius, or vice versa, feels worse than switching between Microsoft Office and other ostensibly compatible formats.
Music notation is incredibly complex, and there are many places things can go wrong. There's a wide spectrum of error situations, such as:
* The exporting application "thinks" about notation in a different way than the importing application (i.e., it has a different mental model).
* MusicXML provides multiple ways of encoding the same musical concept, and some applications don't make the effort to check for all possible scenarios.
* Some applications support a certain type of notation while others don't.
* MusicXML doesn't have a semantic way of encoding certain musical concepts, leading applications to encode them as plain text (via the words element), if at all.
* Good ol' fashioned bugs in MusicXML import or export. (Music notation is complex, so it's easy to introduce bugs!)
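The visual/performance split shows up even for something as small as a tied note: MusicXML carries the sound with a `<tie>` element on the note, and the visual curve separately with `<tied>` under `<notations>`, so an importer has to check both. A minimal illustration:

```python
import xml.etree.ElementTree as ET

# A tied note as MusicXML typically encodes it: <tie> carries the
# performance meaning, <tied> (inside <notations>) carries the visual curve.
note_xml = """
<note>
  <pitch><step>C</step><octave>4</octave></pitch>
  <duration>4</duration>
  <tie type="start"/>
  <notations><tied type="start"/></notations>
</note>
"""

note = ET.fromstring(note_xml)
sound_tie = note.find("tie") is not None              # performance aspect
visual_tie = note.find("notations/tied") is not None  # visual aspect
assert sound_tie and visual_tie  # an importer must handle both
```

An exporter that writes only one of the two, or an importer that reads only one, is exactly the kind of mismatch that makes round-trips lossy.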
I recently used a funny workflow involving MusicXML. I wanted to learn a song that I only had sheet music for, and since I'm not much of a sightsinger, I manually input the sheet music into Vocaloid so I could sing along with it. (OCR exists, but in my experience it's in such a sorry state and requires so many manual fix-ups that for the moment it's easier to type the music in manually. As for entering the data, I've experimented and found I'm significantly faster and more accurate with a piano roll than typing note names in MuseScore.)
Now, as this song had nonsense lyrics and many repetitions and almost-repetitions, its structure didn't quite pop out at me. So what I did was export a MIDI from Vocaloid, which I opened in MuseScore. From MuseScore I then exported it as MusicXML. I opened that in Notepad++ for the sole purpose of pretty-printing the XML to normalize the textual representation, and saved it right back. I then opened it in a Jupyter notebook, where I scraped it for <measure> elements with regular expressions and searched for repeating ones, which I assembled into repeating segments and sub-segments.
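The scraping step looked roughly like this — a toy reconstruction; regexes on XML are crude, but workable once the file has been pretty-printed consistently:

```python
import re
from collections import defaultdict

# Toy stand-in for a pretty-printed MusicXML export.
score = """
<measure number="1"><note><step>C</step></note></measure>
<measure number="2"><note><step>D</step></note></measure>
<measure number="3"><note><step>C</step></note></measure>
"""

# Grab each <measure>...</measure> block, drop the measure number so that
# otherwise-identical measures compare equal, and group them.
measures = re.findall(r"<measure[^>]*>.*?</measure>", score, re.DOTALL)
groups = defaultdict(list)
for i, m in enumerate(measures, start=1):
    key = re.sub(r'number="\d+"', "", m)
    groups[key].append(i)

repeated = [nums for nums in groups.values() if len(nums) > 1]
assert repeated == [[1, 3]]  # measures 1 and 3 are identical up to numbering
```

From the groups of identical measures it's then a short step to spotting repeating segments and sub-segments.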
This helped me memorize the song.
What I liked about MusicXML was that it was self-documenting enough that I didn't need to consult the documentation, and I could find candidates for normalization quite easily (for instance, I didn't care about stem directions or inferred dynamics).
A gotcha is that MuseScore 4 has a bug where it doesn't show the MIDI import panel that lets you adjust the duration quantization. This didn't matter for this song, but it did bite me once in the past when opening a MIDI from Vocaloid (MuseScore 3 works). Without adjusting the quantization, it can infer 16th notes as staccato 8th notes and the like.
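To illustrate the quantization issue: with too coarse a grid, a 16th note can only snap to an 8th-note slot, which the importer then tends to notate as a staccato 8th. A toy sketch of grid snapping (tick values and function name invented):

```python
TICKS_PER_QUARTER = 480

def quantize(duration_ticks: int, grid_ticks: int) -> int:
    """Round a duration to the nearest grid step (never below one step)."""
    steps = max(1, round(duration_ticks / grid_ticks))
    return steps * grid_ticks

sixteenth = TICKS_PER_QUARTER // 4    # 120 ticks
eighth_grid = TICKS_PER_QUARTER // 2  # 240-tick grid
sixteenth_grid = sixteenth            # 120-tick grid

# On a coarse grid the 16th is forced up to an 8th slot (read: "staccato 8th");
# on a fine enough grid it survives as a 16th.
assert quantize(sixteenth, eighth_grid) == 240
assert quantize(sixteenth, sixteenth_grid) == 120
```

Hence the need for that import dialog: the right grid depends on the shortest note value actually in the material.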
Anyone remember IEEE 1599? It seems to share a lot of the same goals.
And there are actually a lot of alternatives, e.g. ABC notation, Alda, Music Macro Language, and LilyPond, to name a few. It's difficult to decide which one to prefer.
MusicXML seems to be more for notation and sheet-music typesetting than for algorithmic operations on the notes themselves. Sure, you could train a model on it, but you'd be better off working in a domain-specific representation and classically translating up to the XML format.
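As a sketch of what "classically translating up" could look like, here's a made-up minimal domain representation being emitted as MusicXML-style markup (the element names follow MusicXML, the tuple format is invented):

```python
import xml.etree.ElementTree as ET

# Do the algorithmic work on a simple domain representation...
notes = [("C", 4, 4), ("D", 4, 2)]  # (step, octave, duration in divisions)

# ...and only translate up to XML at the end.
measure = ET.Element("measure", number="1")
for step, octave, dur in notes:
    note = ET.SubElement(measure, "note")
    pitch = ET.SubElement(note, "pitch")
    ET.SubElement(pitch, "step").text = step
    ET.SubElement(pitch, "octave").text = str(octave)
    ET.SubElement(note, "duration").text = str(dur)

xml = ET.tostring(measure, encoding="unicode")
assert xml.count("<note>") == 2
```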
Exoristos|2 years ago
Considering this data is machine-generated and machine-ingested, moving away from XML seems like a big step down.
account-5|2 years ago
[0] https://w3c.github.io/mnx/docs/comparisons/musicxml/
gnulinux|2 years ago
https://music-encoding.org/about/
This is what MuseScore 4 will soon start using.
Kye|2 years ago
Related: Can it handle non-Western notations?
AdmiralAsshat|2 years ago
https://news.ycombinator.com/item?id=38460827
Never change, tech world!
jonathrg|2 years ago
Does anyone have any success stories?