This is a public draft for a forthcoming, somewhat sprawling post on the Bela.io blog, and has been cut down for clarity. The technical context means that there’s a smattering of jargon, and it’ll sometimes drift into a love letter to Bela. Please get in touch if you have any thoughts. Apologies for current lack of image descriptions.
When the Bela team asked me to write a guest blog post about what I’m doing with the platform, I knew that it wasn’t going to be straightforward, particularly because I believe there’s a message behind most of this work that risks getting lost behind the fancy bleeps and flashing lights. So I’d like to cut straight to the chase.
The essence of this post is as follows: embedded computing is bringing adaptability to instrument development in ways that seemed like wild science fiction only a few years ago. But we need to work harder to open up access. Maintaining an open source approach is one way of going about this. The means to adapt and extend sit at the heart of access.
I use Bela on a daily basis. It’s a combination of a few pieces of hardware and software, building upon open source projects, that come together to create a small embeddable computer. It’s designed to handle sound while minimising any audible delay between interaction and output. It’s relatively affordable (but I should mention that similar platforms exist at a fraction of the price – more on those in a future expansion of this post).
I like this situation partly because it helps me move from making controllers, toward making what I might more readily call instruments. Self-contained objects that don’t always require an expensive laptop sitting at the side doing all the hard work. This helps create a context that de-emphasises screens and helps explore more tangible interfaces.
Are screens and laptops still relevant? Absolutely, as toxic as I might find them myself in situations that demand social interactions (that’s another blog post, perhaps)! But these new instruments afford different kinds of interaction. Less things to plug in (which can mean less barriers on setting something up independently). Options.
So, technology like this is rapidly becoming a key ingredient in making what we could call accessible music technology. It’s commonplace to use “AMT” as shorthand for this (the A is kind of interchangeable between accessible, assistive, or adapted)..but I find myself increasingly questioning that label. We could argue that all music technology should be accessible, but there will always be exceptions — and this also downplays a very important point: what’s still lacking in accessible music technology is the direct involvement of the people it claims to engage. In my opinion, if we approach Access first, rather than compromising with an abstract notion of accessibility, the conversations and inclusion we seek are more likely to take place.
Working within well-intentioned frameworks like Universal Design or Design Thinking, designers often lose their way by making assumptions about accessibility rather than engaging in direct dialogue or action. This can lead to the development of what Liz Jackson has recently identified as “Disability Dongles”: “a well intended, elegant, yet useless solution to a problem we never knew we had”. For example, gloves that translate sign language into English, or expensive wheelchairs that can negotiate stairs (I’ve held back on my opinions on equivalents in our scene, at least for now..it’s complex). These are items that place responsibility onto the individual to adapt, rather than addressing issues with the environment or culture. Jackson suggests another approach, which she calls Design Questioning. Please feel free to abandon this post and go and listen to her speak about it herself.
The “AMT” scene is, of course, prone to the same problems. Digital instruments bring an immediate and highly adaptable approach to sound, but while the gap between musician and coder is narrowing in some fields, it’s still a fairly exclusive scene, with only narrow points of access represented. Accessibility in this context is too often taken to mean the inclusion of a touch screen, large buttons, or a proximity sensor.
In many cases, instruments are produced for therapeutic purposes or designed to be operated by people taking roles of facilitators or carers rather than affording direct control, and these constraints are accepted as the norm. And if a device can make a pleasant noise with minimal effort, well, that’s often regarded as a sufficient marker of success. Is this good enough? Well, to be honest, maybe sometimes it can be..because that very direct removal of a barrier can be a way of bypassing cultural gatekeepers and focussing on the communication and power that noise-making gives us. But as an act of compromise, does this really represent the creative potential and voice of Disabled musicians? What of the expression found in years of practicing an instrument? This stuff is hard, for a reason..access is not the same as easiness. Of course, these are questions to be answered by the musicians themselves.
Meanwhile, Disabled people continue to innovate and adapt by hacking individual solutions, which end up sitting outside the central flow of development.
Organisations like Drake Music, led by Disabled artists, are beginning to address this situation with statements like “by and with rather than for” at the recent DMLab instrument hacking community relaunch. This echoes the “nothing about us without us” slogan adopted by the Disability rights movement in the 90s, and refocuses activities on the social model of Disability.
So what about this new wave of tools? By providing opportunities for rapid prototyping that doesn’t require constant attachment to computers, I hope that technology along the lines of the Bela platform can help pave the way for more musical instruments designed and created by Disabled people. That is to say, “Disabled artist-led music technology” rather than just “accessible music technology”.
It’s a shift that I’m starting to feel in my own practice, as the person usually identified as the non-Disabled partner in instrument building collaborations (although this is not always the case), and as someone seeking conversations around these issues of access and inclusion, beginning to engage with Disabled arts and culture in my more individual experiments.
Bela-based projects engaging with access
None of this happened overnight — I worked on many projects around access (some probably quite questionable) over the years before this, but it was contact with Drake Music, and particularly the collaboration with John below, that really changed everything.
I’d rather hand over to John Kelly for an introduction to the Kellycaster (and thanks to John for tweaking some of this text):
This is a guitar. The concept of the Kellycaster came from John Kelly, and it came into being through collaborative design with essential support from Drake Music (and with a physical form developed by John Dickinson and Gawain Hewitt).
The Kellycaster is a stringed guitar that feeds into a digital pickup. Rather than using the traditional fretboard to shape notes and chords, the Kellycaster features a chord library: John selects and modifies chords for a given song into a song bank, and these chords are then selected live whilst playing, using a MIDI keyboard or similar device. We use Bela to pick up the volume of the strings and map it onto the notes of the current chord.
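To make that mapping a little more concrete, here’s a rough sketch of the idea in Python. To be clear, this is not the actual Kellycaster code (which runs on Bela in its own form), and the chord voicings, threshold, and names below are all made up for illustration: an envelope follower tracks each string’s level, and a strummed string triggers the corresponding note of whichever chord is currently selected from the song bank.

```python
# Hypothetical sketch of the Kellycaster idea: follow each string's level,
# then trigger the matching note of the currently selected chord.
# All values here are illustrative, not taken from the real instrument.

class EnvelopeFollower:
    def __init__(self, attack=0.5, release=0.999):
        self.attack = attack      # smoothing coefficient while rising
        self.release = release    # smoothing coefficient while falling
        self.level = 0.0

    def process(self, sample):
        """Feed one audio sample; returns the smoothed amplitude."""
        rectified = abs(sample)
        coeff = self.attack if rectified > self.level else self.release
        self.level = coeff * self.level + (1.0 - coeff) * rectified
        return self.level

# A tiny "song bank": each chord is a list of MIDI notes, one per string.
SONG_BANK = {
    "G":  [43, 47, 50, 55, 59, 67],
    "Em": [40, 47, 52, 55, 59, 64],
}

def strum(string_index, envelope_level, chord_name, threshold=0.05):
    """Return (midi_note, velocity) if the string is loud enough, else None."""
    if envelope_level < threshold:
        return None
    note = SONG_BANK[chord_name][string_index]
    velocity = min(127, int(envelope_level * 127))
    return (note, velocity)
```

The point of the sketch is the shape of the mapping: the strings keep their expressive dynamics, while the harmonic content is delegated to the song bank.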
John: “we have come so far and the journey continues, our dream now is an instrument with a jack connection to plug straight into an amp and of course implementing the ‘Boogie Bar’” [n.b. for the layperson: this means we’re finally going to replace the biro we have sellotaped to the guitar body with a proper joystick].
We maintained the computer connection rather than moving to that direct audio output straight away (against what I considered my better judgement) because this was the best way to ensure access without my being involved all the time. Rather than optimising it prematurely by reducing everything to a self-contained box, we built the chord system upon the familiar Ableton Live interface, so John ended up gigging with it almost immediately, tailoring his own sound, and I would be able to respond to this in our development sessions. The interface was open to be remapped and hacked in real-time, and we could fight it out together to decide how the changes would make it into the code.
We’d made the initial prototype at a hackathon in a matter of hours, but the development process was not easy or quick by any stretch of the imagination. Trying to guess what was going to work for this instrument only got us so far: it took grit and tenacity on both sides to recognise that certain nuances were not coming through, and to find solutions in playing style or code without compromising the aesthetic.
By halfway through the project, I had made a point of only working on the code while we were together. It was almost certainly a textbook example of how not to do design in the conventional sense, but it got us to where we needed to be. These processes we worked through together have given us tools for future projects. And we’re still speaking to each other.
This project took place as part of the London Liberty Festival in 2017, and was a collaboration between Drake Music and Arts and Gardens. We had a few R&D meetings, led by a design group of Disabled artists including Drake Music associate musicians, in which some rather outlandish ideas came up. Incorporating Bela, Raspberry Pi, and Touch Boards meant that we could bring sample players and proximity sensors to an outdoor setting, entirely battery powered and weatherproofed.
Among the instruments were seed clocks that responded to breath, switches that could be walked and wheeled over to play samples, and trees with capacitive sensors that sang back when visitors hugged them. These were all applications for which we’d usually rely upon a computer or a hardware sampler, which instead fitted neatly into a Tupperware box.
These weren’t what I would necessarily describe as accessible instruments, but they presented a new set of options in this context. And they were a key element in an immersive experience through which the audience could engage in a piece of music, decide which area they preferred, and start dialog with musicians and guides on how to participate if the access options didn’t quite fit.
This experience informed a project at Belvue School as part of Youth Music’s Exchanging Notes programme, which we dubbed “music outside the classroom”:
Together with young musicians, we ended up dismantling the sampler from Planted Symphony, embedding it in a Lazy Susan, which became a permanent box for the school’s music classes.
And more.. we took devices outside as part of the ImPArt performing arts summit in Remscheid, Germany last year: https://www.facebook.com/ImPArt.eu/posts/420230005214033
Light Recorders are a set of small devices I created, which translate light and sound back and forth into vibrations. The project started as an attempt to bring the approach from Planted Symphony to my own work, to explore the boundaries of so-called accessible music technology, and to start exploring the “aesthetics of access”.
I quickly learnt that the latter part didn’t really work out without engaging with access and disability culture through dialog (turns out that making something a “multi-sensory experience” with some switches doesn’t really mean anything in abstract). And in a few people’s eyes, this piece was a failure for other reasons: it never really looked like a finished piece..instead, in the first exhibition, I ended up sitting in the corner with a soldering iron for the most part, interrupting myself to come and talk directly with visitors and figure out how it should work..spending a whole day trying to connect a cable from one side of the room to the other. I later realised that I was playing on my ADHD traits to explore a sort of personal performance art that I’ve been fervently developing ever since.
Once I could accept that this was an unfinished piece, everything opened up. So the name “Light Recorders” has come to represent a glorious explosion of stuff..a resolutely unfinished piece with space for dialogue with whoever cares to engage.
Bela fits in here as a way to create sounds in response to the lights within a matter of minutes (and most of that is often waiting for the device to load up). At a few pence each, and without the need for complex circuitry to get a result, photoresistors provide a great introduction to sensor-based instruments.
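As a rough illustration of how little is needed to get from light to sound (sketched in Python here, though on Bela this would typically be a few lines of C++ or a Pure Data patch; the frequency range and function names are arbitrary choices of mine, not part of any Bela API):

```python
# Illustrative sketch: a photoresistor reading, normalised to 0..1 as it
# would arrive from an analog input, is mapped to an oscillator frequency.

import math

def light_to_frequency(reading, low_hz=110.0, high_hz=880.0):
    """Map a normalised sensor reading (0..1) to a frequency, scaled
    exponentially so equal changes in light feel like equal musical steps."""
    reading = max(0.0, min(1.0, reading))
    return low_hz * (high_hz / low_hz) ** reading

def sine_block(frequency, sample_rate=44100, num_samples=64):
    """Generate one block of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * frequency * n / sample_rate)
            for n in range(num_samples)]
```

The exponential mapping is the only non-obvious part: a linear mapping sounds cramped at the top of the range, because we hear pitch in ratios rather than differences.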
Instrument Maker is a sprawling project: essentially a library based on the visual programming environment Pure Data, designed to open up instrument development. This has expanded to include a set of AAC-style symbols and hardware (currently created to adapt..you guessed it..those wonderful Bela boards).
I started putting together the code for Instrument Maker with Gift Tshuma, a musician based in Montreal, as a way to develop some workshops soon to be launched with the Education Makers group at Milieux Institute. It builds upon existing repositories of code developed in a variety of contexts: developing for Drake Music and Wac Arts, teaching in universities, and those late night experimental sessions for their own sake.
We wanted to fill a gap we’d noticed in currently available resources. During an excited conversation about the potential for enhanced accessibility through “dataflow” programming, Gift challenged me to make a two minute demonstration video of the instrument making process. Of course, I failed — there was too much to explain just to make a few oscillators to play in a scale. But we went on to discuss how we could develop a set of abstractions to help find an entry point appropriate to that context.
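To give a flavour of what such an abstraction might collapse, here is a purely hypothetical Python equivalent (Instrument Maker itself is built on Pure Data, so this is a translation of the idea, not its actual objects): the boilerplate behind “a few oscillators that play in a scale” folded into a single call.

```python
# Hypothetical sketch of an Instrument Maker-style abstraction: ask for a
# scale, get back ready-to-use oscillator frequencies. Names are invented.

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone steps of a major scale

def scale_frequencies(root_midi=60, scale=MAJOR, octaves=1):
    """Frequencies (Hz) of a scale starting at the given MIDI root note."""
    notes = [root_midi + 12 * octave + step
             for octave in range(octaves) for step in scale]
    # standard equal-temperament conversion, A4 (MIDI 69) = 440 Hz
    return [440.0 * 2 ** ((n - 69) / 12) for n in notes]
```

The underlying arithmetic (MIDI-to-frequency, scale degrees) is exactly the material that overran the two-minute video; the abstraction’s job is to decide how much of it a first session needs to see.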
Our aim is to find a situation where the spark of excitement and potential in that hackathon or workshop moment is led by power being held by everyone involved. Something to challenge gatekeeping and the existing barriers around assumed knowledge, as well as more practical barriers to code. Of course, not everyone seeks independence or aspires to be an audio programmer. In fact, it can be damaging and frustrating to present a false impression of independence. Rather, we want people to know that the possibilities are out there, and that these shifts in power can take place.
With time, we started to add options for screen reader access, “easy read” versions of the code, and alternatives to the typical mouse-and-keyboard interfaces upon which this kind of programming environment usually relies. The Hidden Sounds group at City Lit started requesting symbols, which we developed together, and eventually turned into a whole collection. We’re working on ways to integrate this with the code, as an option.
I’ve often wondered if, when we’re operating iPad apps and setting presets in a typical school workshop, we may as well be learning to code. Before we even make a start, upon the appearance of a computer or circuit board, the feedback I’ve noted from people in both situations is usually quite similar, along the lines of: “I can’t do this, I’m not a technical person”. This is of course a perfectly valid concern, but I’d like to see how we could challenge this established role, this division of who can and should access the inner workings of these instruments.
So the introductory process we’ve worked out for Instrument Maker is very much based on how we’d try to find a comfortable, unpatronising path into these situations, with the goal of making music placed up front: setting out the parameters to make music in an individualised manner, through a series of questions, either in person or through simulated dialog with a computer, which gradually unfolds into exploring what we might call traditional code if so desired.
Robyn Steward has been instrumental in pushing this forward; we used Instrument Maker to develop her wireless effects pedal (which Robyn has named Barry) as part of her Emergent commission for Drake Music. We recently collaborated on an iPad app development workshop using this code, and Robyn’s opening statement on our worksheet says it all: “it’s OK to make mistakes, making mistakes means you’re learning something”.
To accompany the code, we use a board that fits on top of the Bela as a sort of shield or “capelet”, with a typical “voltage divider” circuit built in: a simple construction to read the values of a sensor, but not exactly the easiest thing to get to grips with in a first session. Just like the code, this gives us a solid starting point for some practical outcomes straight away in a workshop or demonstration.
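For the curious, the arithmetic behind a voltage divider is small, even if the circuit can feel daunting in a first session. Here’s a sketch of it (the 10kΩ value in the test is a typical choice for this kind of circuit, not necessarily what our board uses):

```python
# A voltage divider: a fixed resistor and a sensor (e.g. a photoresistor)
# in series. The voltage at their midpoint varies as the sensor's
# resistance changes, which is what the analog input actually reads.

def divider_voltage(v_in, r_fixed, r_sensor):
    """Voltage at the midpoint: v_in * r_sensor / (r_fixed + r_sensor)."""
    return v_in * r_sensor / (r_fixed + r_sensor)

def sensor_resistance(v_out, v_in, r_fixed):
    """Invert the divider equation to recover the sensor's resistance."""
    return r_fixed * v_out / (v_in - v_out)
```

When the two resistances are equal, the midpoint sits at exactly half the supply voltage, which is a handy sanity check when wiring one up for the first time.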
Throughout these various options, the point isn’t to simplify anything, although it certainly appears that way. Simplification can be part of a solution, but it’s no good if the ability to expand is closed off in the process. The real work will be in creating a way through the objects that helps create a learning experience for those that want it, without getting in the way of those who are confident enough to get stuck in. This isn’t the sort of thing that can be guessed or worked out through “empathy” – it’s only going to come about through conversation and direct involvement.
So that’s it for now..please let me know what you think of all this (I don’t keep a public email account, but you can reach me at @matthewscharles on Twitter).
I’ve ranted about this before (at the start of this journey), here.
This is a working document – original posted 11th April 2018 / updated 20th April 2018.
First, some context: over the last couple of weeks I’ve been thinking a lot about disability and the performing arts, as I was visiting Montreal on a personal mission to learn about the scene there and share practice. The tone of the visit was set by attending a Cinema Politica screening of Defiant Lives, a film about the history of the disability rights movement, which was followed by discussion with the local community, including activists. It was an eye opener as well to interact with the Critical Disability Studies Working Group, part of the Participatory Media Cluster at Milieux Institute, Concordia University. I found myself engaging with issues around disability and equality rights on an international level for the first time, considering related issues such as gender, and applying these ideas back to the work I do both with Drake Music and as an independent sound artist/instrument maker. Running parallel to this, I ended up expanding my making skills, discussing accessible spaces, and starting new collaborations through the Education Makers jams.
I’m grateful to have had the opportunity to deliver a workshop on collaboration in accessible music tech development as a result of this dialog, where some of this started to come together (I’m grateful to Owen Chapman for setting this in motion). This was an opportunity to seek critical outsider perspectives on practice that is all too often framed within victory narratives. More of that in a future blog post, but there was one aspect that I came away unsatisfied with that I felt the need to write about here.
I had planned to make the social model of disability a foundation of the workshop and advertised it as such. Learning about this approach through my contact with Drake Music (and in particular a sustained collaboration with John Kelly on the Kellycaster) was a pivotal moment in my work as an educator and musician, and I’d go further to say it has completely changed my worldview. The social model is important because it represents the first significant moment that disabled people reclaimed disability and redefined it according to their own experiences (that’s not to say that every disabled person identifies or agrees with it). It’s about liberation, a move from the dominant imposed models and a move towards a rights-based approach.
In simplified form, the social model sits in opposition to the medical model, in which disability is conflated with impairment, something to be fixed, and responsibility falls upon the individual to fit in. Through the social model, impairments are recognised, but disability is considered the product of society’s failure to accommodate individuals’ rights to equal access; the barriers created by society in various forms.
I find it incredibly difficult to speak about the social model in general terms, whether I’m trying to provide an introduction or realise that I’m preaching to the converted. I confess I have a surface understanding of this area learned mostly through practice. In this case I did my usual cop-out: I played a video (the introduction provided by Scope is excellent, and was one of my own entry points, but doesn’t feel right for this kind of moment), and stumbled through some token references to aesthetics of access pioneered by Graeae.
The problem is that whenever I try to talk about these issues, I keep coming back to language – or rather, getting bogged down in it. Yes, discussing the social model raises important issues around person-first language (and indeed the importance of respecting self-identification on that front). And we can draw a lot of parallels between building automatic doors to benefit everyone and building access into technology. But that’s where I’ve tended to leave the discussion. Maybe it’s because these issues are less directly relevant to me, to my personal lived experience. Maybe it’s because they almost seem too obvious to explore at length, but I’ve come to realise that the translation between contexts isn’t always clear in the way that I assume it is.
My approach to this was partly inspired by something Ann-Louise Davidson (founder of the maker lab at Milieux) said in a recent conversation: that in her line of work, so much dialog across practices is really about translation. I realised that I was missing a more direct way of placing some of these ideas in context, in the language of music-making and instrument development as well as broader access. And that’s when something clicked.
So after some thought, here’s the missing slide, comparing the medical and social models in context. This is by no means exhaustive, and there’s undoubtedly a bias towards instrumental/performance situations and a hearing, white, cisgender male person’s perspective. So it’s just a start, and requires much further discussion, but I think at least this illustrates some of the current issues in the context of music technology and what we might describe as adapted or accessible instruments.
A rough version of this slide, which I created for a music teacher training session at Trinity Laban, can be found at this link.
Many of these thoughts have come out of conversation with disabled people who are musicians, facilitators, makers, and artists, over the past couple of years (a grouping in which I would sometimes place myself, but I’ll return to that later). In particular I’d like to thank John Kelly and Gift Tshuma (a Montreal-based musician and activist), who I spoke to over Skype and in person respectively within a couple of hours of each other the day before the Milieux workshop. A common thread in these two conversations was the questioning of the frequent use of the term “inspirational”, which I guess we might think of as a remnant of the charity model that was prominent in the 20th century, and the subsequent “superhuman” narrative that developed around the 2012 Paralympics. Not to suggest that disabled people can’t be inspirational in their own right, just like anyone else (or indeed self-identify and own these labels), but these conversations highlighted that this situation is often a barrier to equality. As John put it, “if I inspire people, I hope it’s to challenge the crap that’s out there”.
It was John (alongside Gawain Hewitt, head of Drake Music’s R&D programme) who first introduced me to the social model in this context. I sent the slide to John to ask his opinion, and I’m grateful for a couple of tweaks to the terms I’ve used in the diagram, in particular a focus on rights and equality (any mistakes or misrepresentations remain my own). I also asked John what he thought about the inclusion of “medical” in quotes, since I found myself questioning its relevance in this context. John had this to say – with a disclaimer that this is based on personal experience and needs backing up with evidence:
For me the medical model in music is about …fitting in to assumptions of normality…seeing the purpose/role as being only therapeutic/rehabilitation/self expression…instruments played in one prescriptive way…examination process excluding/devaluing adjustment…a hobby/time filler.
Running through all of this is a question of where I place myself. In certain contexts, I identify as disabled. I’m not sure how constructive it is for me to identify as “partially disabled”, but the barriers I experience might be described as less visible or less commonly recognised than others. I received my first ADHD diagnosis shortly before being excluded from secondary education, and another in adult life when I reached breaking point post-PhD, finding that the only way I could work would be to spend long hours making up for lack of focus (or indeed, having hyper-focused on the wrong task). For a long time, I felt unable to pursue work in academia. I still encounter barriers in my interactions as a freelancer with other professionals, to the point of sometimes being pushed out of projects rather than discussing avenues for support. And left to my own devices, I watch myself oversubscribing and drowning in unrealistic commitments and cables, again and again.
It’s complex – the lines on my own responsibilities are often unclear. I have agency in these situations. And if I don’t initiate discussion around these situations somehow, then perhaps I’m not creating opportunities for the shared responsibility that access often needs in order to work. I reject the “deficiency” part of the ADHD label in the same way I would reject attitudes to diverse body types as deficient – not that these are equivalent issues (for the record, re: ADHD, I prefer the as yet unrecognised terms Attention Regulation Condition or Creative Self Sabotaging Disorder, depending on the circumstances). In line with the social model, for the most part I do not find my immediate environment disabling. As well as having found an appropriate line of work, I recognise a certain amount of privilege in this. So, asking if I identify as disabled, the answer remains ambiguous. However, learning about the social model has helped me understand this situation.
After posting an early version of this piece, I received this response from a friend and sometime labelmate Richard Wrigglesworth, who records as Tudor Acid:
The main issue I have in this area is that sometimes (in autistic communities, not sure what it’s like in ADHD communities) there is a tendency to assign success in music to your neurodiversity, whereas for me equality REALLY is enough, and I think that the superhuman narrative is a huge pitfall!
This is an important point in terms of my own journey. My creative output is certainly influenced by the way my mind works in interaction with the resources available to me and my social environment. I would like this to be recognised, but not as a determining factor. I suppose this highlights part of the drive for me in attempting to unpick this, and may need to be the subject of a future post.
There’s so much to explore here, and undoubtedly this is not new territory for a lot of people, but I hope this can provide a spark for some conversation. I am posting this in rough form in the spirit of openness and sharing ideas, so I hope to follow up soon and would love to hear what other people think of this.
This interactive installation was presented at the Old Truman Brewery as part of Hackoustic Village, November 2017.
Visitors were invited to explore a darkened room with interactive hand-held devices, created especially for the occasion. While a button was held down, the torch measured the intensity of light wherever it was pointed, which was translated into tangible vibrations. Upon releasing the button, the light would be played back from the front of the device as a flickering pattern.
This meant that visitors could essentially take the flickering lights from around the room and move them to other places, shining them onto other objects that responded in turn with sound and more lights. Two visitors holding their own torches could pass light patterns to each other, or patterns could also be created by moving back and forth.
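In case it helps to see the logic, here is a hypothetical sketch of that record/playback loop (the real devices run on embedded hardware; the class name, buffer size, and behaviour on re-press are all my own illustrative choices):

```python
# Hypothetical Light Recorder loop: while the button is held, light readings
# fill a buffer (and would drive the vibration motor); on release, the
# buffer is replayed through the front-facing light as a flicker pattern.

class LightRecorder:
    def __init__(self, max_samples=1000):
        self.buffer = []
        self.max_samples = max_samples
        self.recording = False
        self.position = 0

    def update(self, button_down, light_reading):
        """Call once per control cycle. Returns the brightness to output,
        or None while recording (the light stays off to avoid feedback)."""
        if button_down:
            if not self.recording:          # button just pressed: start over
                self.buffer = []
                self.recording = True
            if len(self.buffer) < self.max_samples:
                self.buffer.append(light_reading)
            return None
        if self.recording:                  # button just released: rewind
            self.recording = False
            self.position = 0
        if self.buffer:                     # loop the recorded pattern
            value = self.buffer[self.position % len(self.buffer)]
            self.position += 1
            return value
        return 0.0
```

Because the output is just a stored sequence of brightness values, “passing light between torches” falls out naturally: one device’s playback becomes another device’s recording.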
The overall piece was presented as a work in progress, a kind of open studio in an attempt to show my process and environment to passers-by. The most satisfying aspect for me as an artist, having proved that the technical concept worked, was the conversations that it brought about.
Key resources/reading list:
— This list is work in progress from last year. For those interested in getting historical context for sonic arts, I highly recommend Audio Culture by Cox and Warner. Many of the texts appear to be available on http://monoskop.org/Monoskop although I’m not sure about the copyright status.
Drone– Eliane Radigue / Glitch – Kim Cascone / Soundscape – Luc Ferrari / Conceptual – Susan Philipsz / Early Electroacoustic Music – Karlheinz Stockhausen & Pierre Schaeffer / Feedback & acoustics – Alvin Lucier / Sound Sculptures – Max Eastly
A playlist of versions of John Cage’s 4’33” – the “silent piece” (more info here):
This is some information on R. Murray Schafer and the notion of soundscape: http://ardisson.net/a/?p=180
And finally, a documentary from the BBC on the notion of silence: http://www.bbc.co.uk/programmes/b06386cs
R. Murray Schafer and Soundscape
Murray Schafer is a Canadian composer born in 1933 (not to be confused with Pierre Schaeffer, who influenced his ideas). He began forming ideas on soundscape in the 1960s and 1970s through his work at Simon Fraser University.
Schafer compares the world to a musical instrument or composition, and refers to the “tuning of the world”, suggesting that we are responsible for our sonic environment and for achieving a kind of state of harmony. According to Schafer, “noise equals power” (1977, chapter 5). In particular, Schafer is concerned about the increase of noise with the advent of technology and industrialisation. His work has in part led to the development of a field called acoustic ecology: the study of the relationships between organisms (people, animals, etc.) and the environment through sound. This often takes the shape of mapping and preserving sounds.
Schafer defines soundscape broadly as “any acoustic field of study”, including musical composition, radio programmes, or acoustic environments. This latter category is generally the focus of his writing and is perhaps the easiest starting point. He suggests that soundscapes can be analysed according to three key features or “event types”: keynote sounds, sound signals, and soundmarks (these have been expanded over the years – see the Wikipedia links below).
According to Schafer, soundscapes can generally be classified as hi-fi or lo-fi, according to the amount of detail that can be perceived. With the constant noise of technology Schafer suggests that modern soundscapes are increasingly lo-fi.
- An Introduction to Acoustic Ecology (CEC): http://cec.sonus.ca/econtact/5_3/wrightson_acousticecology.html
- Sensing the City: http://www.david-howes.com/senses/sensing-the-city-lecture-RMurraySchafer.htm
- Monoskop: http://monoskop.org/index.php?search=schafer&title=Special%3ASearch&go=%E2%8F%8E
- Soundscape composition: http://www.sfu.ca/~truax/scomp.html
- Soundscape on Wikipedia: http://en.wikipedia.org/wiki/Soundscape
- World Forum for Acoustic Ecology: http://wfae.proscenia.net/
- World Soundscape Project: http://www.sfu.ca/~truax/wsp.html
- Interview with Schafer, covering soundscape and noise: https://www.youtube.com/watch?v=JX9VzICmKpA
- Schafer as a composer: https://www.youtube.com/watch?v=a6LsvEt952Q
- Iturbide, M. R. (2009). Structure and Psychoacoustic Perception of the Electroacoustic Soundscape. Ann Arbor, MI: MPublishing, University of Michigan Library.
- Schafer, R. M. (1969). Ear Cleaning: notes for an experimental music course. Berandol Music; sole selling agents: Associated Music Publishers, New York.
- Schafer, R. M. (1993). The soundscape: Our sonic environment and the tuning of the world. Inner Traditions/Bear & Co.
- Westerkamp, H. (2002). Linking soundscape composition and acoustic ecology. Organised Sound, 7(1), 51–56.