How Sound Can Transform Data with Hugh McGrory

March 17, 2022

Season 3, Episode 3

Listen to the podcast: Purpose in Retirement

Hugh McGrory: I mentioned earlier that what we already know about what sound does for movies and games needs to be brought into how sound works as a communication system with data, because it doesn’t just represent what you see, it adds to that. And it brings in things that only music and sound and the textures of all of that can do.

Introduction

Scott Miller: This is a show where we’ll explore what it means to retire with purpose.

Juanita Fox: To make a difference, to invest in your family, your community, to live to your full potential and explore abundant opportunities to live with purpose in community.

Scott Miller: From Garden Spot Communities in New Holland, Pennsylvania, welcome to

Juanita Fox: Purpose in Retirement.

Scott Miller: I’m Scott Miller, the chief marketing officer at Garden Spot Communities.

Juanita Fox: And I’m Juanita Fox, the storyteller. In this season of Purpose in Retirement, we will talk with experts who will share ways that innovation and emerging technologies can improve the quality of our lives and help us to live with purpose and community.

Scott Miller: In this episode, we’ll be talking with Hugh McGrory, the founder of Sonify. It’s a company that uses sound and music to tell the stories of data.

Juanita Fox: Hugh partnered with Google several years ago to launch a website called Two Tone (that’s T-W-O, T-O-N-E) which makes data sonification accessible for free for everyone. We will talk with Hugh about that website and his current work to transform the way we access and learn about data.

Scott Miller: So in just a moment, we’ll talk with Hugh.

Juanita Fox: Hugh, thank you so much for joining us today. For the 2022 season of Purpose in Retirement, we’re talking with industry innovators and leaders like you who are offering new and emerging technologies that can help us discover opportunities to live with purpose and community. Hugh, you’re taking data and making music? I’m totally simplifying it here, but your efforts with Sonify are certainly challenging us to think about data differently.

Hugh McGrory: Yes, thank you. And firstly, thank you for inviting me on to talk about this today. What I’m working with is the idea around, you know, data is essentially the currency of the web, and data is growing exponentially. But the way data is presented is quite limiting. It’s only visual. So a lot of my work is based on the concept of “how could we add sound and how could we also take advantage of new behaviors and new emerging technologies to see if we can add value in any way to how we can present that data to people and drive better connections.”

Scott Miller: Yeah, and that is so unlike the way we typically think. It might be helpful for our listeners if you can talk a little bit about your background and experience.

Hugh McGrory: Well, you might have picked up a little bit of an accent. I grew up in the north of Ireland, a place called Derry, pretty much as far north as you can go. And before I moved to the U.S., which is about 12 years ago now, I ran a production studio. So we made films, both documentary and fiction, but we also worked in computer animation. And in a lot of that work, you know, you could just see how quickly digital was advancing. I’m sure you all know from your own experience that things improve, things get better, things get faster, things get, hopefully, easier to create as each year passes by. And we were very much involved in the early wave of how we could work with digital machines to expand creativity and look at how we interact with machines, what we can get them to do. But very much from the purpose of not just trying to make machines better, but trying to see whether that could allow us to create and communicate in different ways.

And a lot of that work, bizarrely, ended up– a real turning point for me was when I was invited by Yale University in Connecticut. Their School of Medicine’s molecular imaging department invited me to come over and spend the summer of 2007 in residence with their scientists and, you know, their postdoctoral fellows, etc., who were doing research on cells.

Scott Miller: Ah, okay.

Hugh McGrory: And up until that point, my world for all of this stuff was images and sounds. And when I was at Yale and looking at the workflow, like, looking at how they did their imaging, that’s when data showed up for me for the first time. Because when they’re creating images, they’re not necessarily using a camera. And that would be true of NASA as well. In a lot of cases, what they’re doing is taking a large amount of data and creating an image from that data.

So that really opened my eyes in a sense to how, you know, on a real nuts and bolts practical level, all of the images that we see in digital space are just ones and zeros.

Scott Miller: Mm hmm.

Juanita Fox: Mm.

Hugh McGrory: All of the sounds that we hear are just ones and zeros. And data is also ones and zeros in its pure form.

So data can be made to look like or sound like or react to essentially anything. But when you look at the entire field of data and how it’s communicated, it seems almost self-policed by experts into a corner or wrapped in a bubble where nobody seems to be able to think beyond creating pie charts or bar charts as the best way to explain this.

And, you know, those are both good ways to explain data, but they’re certainly not the only way. A lot more is possible. And, you know, we all just lived through two years of a pandemic where, in the early stages, access to quality, timely, trusted data was literally a life-or-death issue.

So there’s a lot at stake in being able to improve these systems. And, you know, there’s stuff in development as well in every city in the world. The Internet of Things is literally sensors put into everything. So the amount of data that’s going to be picked up by a city or a community is going to grow exponentially.

And then we’ve got new behaviors: people listen to books, people navigate with GPS. It’s just convenient to be able to add sound, for instance. But also we’re going to be moving into a world of self-driving vehicles, etc., and our whole world is designed, certainly in America, around the automobile.

So there’s big, big changes afoot and what do we do with all that data? How can we empower people? How can we make their life a little bit easier or inform them, or make them more engaged? And I’m not necessarily sure that texting people bar charts is going to be the best way of dealing with our near future.

Scott Miller: I would agree with that. What tool are you building right now? What do you have the opportunity to work on?

Hugh McGrory: So a few years ago, we built a tool with support from Google News Initiative. We built a tool to turn data into music. And that tool was called Two Tone, T-W-O, T-O-N-E.  And if you put a dot I-O at the end of that, that’s the web address. So twotone.io, anybody listening can go on there. It’s free. It’s really, really simple to use. And we created it because data sonification is the term for turning data into music. And we didn’t invent that. That’s been around for decades. It’s used in science labs. It’s used by NASA, like I mentioned. It’s used in financial markets, etc. But a tool that was available online didn’t really exist.

Juanita Fox: Can you explain data sonification and what does that mean?

Hugh McGrory: So data sonification– if you look at how it’s explained, I don’t necessarily agree with the strict definition. If you go to Wikipedia or something like that, it’ll tell you that data sonification is the use of nonspeech audio to convey data or perceptualize information. And by nonspeech audio, obviously, they mean sounds or music, that’s pretty much it. What’s interesting is that at the end of last year, I had the privilege of being invited to speak on a panel at Stanford University about the future of the web as a voice web, where, you know, we’d maybe be able to speak to the Internet, and why can’t we? That’s natural language processing. Artificial intelligence is increasingly able to understand spoken language. You know, we have Google Home and Amazon Alexa and all these things. Why can’t I speak to the web and order a table at a restaurant, etc.? But what’s interesting there, and why I mention it, is that that’s called the voice web. And where sonification kind of defines itself strictly by not using voice, voice defines itself by not using sonification.

Juanita Fox: Ahhh, okay.

Scott Miller: Okay.

Hugh McGrory: It limits itself to language, and that has a limitation as well. And when you talk to the scientists engaged in voice, they say, “well, you know, we’re wrestling with things like how to build a human connection and how to convey emotion.” And that’s actually the definition of what music does.

Juanita Fox: Mmm, mm hmm. Yeah.

Hugh McGrory: Music’s strongest thing is conveying emotion, conveying a feeling. And also, you know, it has amazing, very well-researched power to build human connection and social bonds and that kind of stuff.

So I think because I’m coming in as an outsider, I’m not a data scientist, I see the bubbles, if you know what I mean, that people work in. I see that voice is limiting itself, and it doesn’t need to. Sonification is limiting itself and it doesn’t need to. Because- I’ll talk more about this later, but I’ve just done a year-long project with the blind and visually impaired community. And they have very much told me that adding human voice to data sonification makes it way easier to understand, and that just makes sense. You know, because it’s giving it context.

You know, I don’t see why we shouldn’t combine these things, rather than using strict definitions. So in a sense, when I’m asked to define sonification, I’m like, “we need to broaden that definition. We need to bring in not just turning data into music, but adding human voices, adding what we know about sound design for movies and games and how that can convey feelings and moods and that kind of stuff.” And then also what we know from music theory itself.

Scott Miller: You mentioned the website Two Tone that you had done. And now we’re talking about, you know, sonification. How does your work now take that idea from Two Tone and improve upon it?

Hugh McGrory: Well, people love Two Tone, but they sort of miss that it was just an experiment. Two Tone wasn’t built to be a solution to anything. It was kind of– it’s really fun. And I can teach anybody how to use it in 5 minutes. It’s ridiculously easy. If you can use Instagram, you can use Two Tone. There aren’t a lot of steps, and it walks you through. I first demoed Two Tone to a class of nine-year-olds, and they were using it in 5 minutes.

Juanita Fox: Early adopters, nine-year-olds.

Hugh McGrory: Yeah, so software doesn’t have to be complicated. Like, lots of people try to make it complicated, and that’s old-fashioned. Like, if you look at something like Adobe Photoshop, it’s terrifying; there are so many drop-downs and all these different things that you need to know. And compare that to Instagram or TikTok, where you just have a filter and that’s it.

Juanita Fox: Yeah.

Hugh McGrory: So Two Tone’s super easy. But it was sort of designed to be a general purpose tool. So it’s not created for a specific task. Like it’s, you know, it wasn’t built to be accessible for blind people, for instance. And then we ended up working with them to find out how we could build something useful that was. So we’re working on that now.

Two Tone isn’t a great tool for scientists or musicians or any niche community. It’s just something that demonstrates the process. We’re also working on new software, which looks at how we can generate both audio and images directly from a spreadsheet or data set.

So we’re thinking about that in a slightly different way. Right now, the way we’re thinking could be called audio-first. So it’s not audio-only, and I’ll give you an example. An audio-first system would be GPS in your car. It’s an audio system that happens to have pictures that are, you know, not Hollywood movie pictures. They’re graphics that maybe have little dots on them and things like that, and, you know, reflect in a different way what you’re hearing.

Juanita Fox: At one point, you had given the analogy about the 1920s and how movies were silent, and that was just fine.

Hugh McGrory: Yes.

Juanita Fox: Can you talk a little bit about that evolution of music in movies and how you see this impacting how we view data?

Hugh McGrory: Yeah, it’s a useful analogy. So pretty much 100 years ago, sound came into movies for the first time; it was the 1920s. A movie called The Jazz Singer is usually credited as the first sound movie in Hollywood. And prior to that, we had, what was it, 30 years or so of movies that didn’t have any sound. And you’d go and see a silent movie. They were called silent movies, kind of the way, right now, they call things data visualization, as if it can only have one sense. So back then, you know, that was just the way it was. It didn’t have sound, but it also didn’t have any color.

Juanita Fox: Mm hmm.

Hugh McGrory: And, you know, I used to teach at university back in Belfast. I taught at Queen’s University there in the film department. I taught modules on the history of cinema. And if you look back at that time, people were like, “why would you want to add sound?”

Juanita Fox: Hmm.

Scott Miller: Mm.

Hugh McGrory: “Why would you do that? That’s ridiculous. Who wants to hear what an actor is saying, who wants to do these things?” And you could argue sound had a detrimental effect on the visual, because prior to when they invented sound– like, if you think about a silent movie in your mind, they’re kind of crazy. You’ve got, like, 30 people on top of a moving car, and you’ve got, like, people hanging off buildings and, you know, all sorts of wackiness and stunts and anarchy, in a sense. And then you contrast that to the 1930s, and everybody’s standing still, and there are two actors, and there’s a large plant in between.

Juanita Fox: Where the mic is.

Hugh McGrory: Exactly. So sound made everything static and sound made everything more theatrical. But people loved it and it brought so much interest from the public. What’s interesting when you examine the sound in movies is that most people think that the sound in movies is the actors talking and a little bit of music. But the sound operates in a very different way, a very, like, subconscious, subliminal kind of way, to make your feelings go up and down. And that’s called sound design, and it’s a very well-developed art with lots of different principles. As I mentioned briefly earlier, what we already know about what sound does for movies and games needs to be brought into how sound works as a communication system with data, because it doesn’t just represent what you see, it adds to that. And it brings in things that only music and sound and the textures of all of that can do.

Juanita Fox: Anything else you want to share with our listeners before we leave?

Hugh McGrory: With your listeners, I just wanted to say that, you know, please become part of the conversation. So come to our website, Sonify.io. Access the free tools, sign up for the lab. And in the lab, people talk to each other, and people talk to us and they say “oh, this didn’t work,” and someone else will tell them how to fix it. Or they’ll say, you know, “did you ever think about this” or “what really annoys me is that.” And that’s when we learn. And certainly age does not come into it, because we think that systems should be designed for everyone and they should work for everyone. And the systems of the past, and the systems that we use right now for the web and digital communication, they didn’t think about people with disabilities. They didn’t think about seniors. They didn’t think about how to make tools open and accessible. And we need to, going forward. If we’re going to redesign these things, we should redesign them to be as inclusive and useful as possible.

Scott Miller: Well Hugh, thank you so much for joining us today. This has been, you know, just incredibly informative. And so, you know, personally, I’ve never thought of data in this way or as something that could really be incorporated into sound. And so thank you so much.

Music

Juanita Fox: Hugh’s work with data sonification is incredible. When I first learned about the concept, it was a little hard to understand. The Two Tone website, available at twotone.io, however, gave me a personal experience with data sonification and quickly helped me understand what he is working to do.
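For listeners who want a feel for what data sonification does under the hood, here is a minimal sketch of the basic idea, mapping each data point to a musical pitch. This is our own illustration, not Sonify’s or Two Tone’s actual code; the scale choice and scaling formula are assumptions for the example.

```python
# Minimal data-sonification sketch: map each data point to a note.
# This is an illustration only, not how Two Tone is implemented.

# A pentatonic scale tends to sound musical regardless of the data
# (a common sonification trick); these are MIDI note numbers
# starting at middle C (60).
PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]

def sonify(values):
    """Scale each value into the range of available notes."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero for flat data
    notes = []
    for v in values:
        idx = round((v - lo) / span * (len(PENTATONIC) - 1))
        notes.append(PENTATONIC[idx])
    return notes

# Rising data produces a rising melody:
print(sonify([10, 20, 30, 40, 50]))
```

Feeding those MIDI note numbers to any synthesizer plays the data as a melody: an upward trend is heard as a rising tune, a spike as a sudden high note.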

Scott Miller: Hugh’s interview on the podcast was edited for length and clarity. Our full conversation with him, however, was absolutely fascinating. So if you’re interested in hearing the full 40-minute interview, we have posted that on our podcast page at www.gardenspotcommunities.org/podcast.  

Juanita Fox: In our conversation, Hugh talked about the opportunities for data sonification. We summarized those opportunities in a PDF entitled Five Opportunities of Data Sonification, and you can find a link to the PDF in the podcast description. The PDF lists the benefits. One: incorporate alerts. Two: quickly hear changes. Three: inclusivity. Four: interest. And five: transformation.

Scott Miller: So thank you for listening to Purpose in Retirement. I’m Scott Miller.

Juanita Fox: And I’m Juanita Fox.

Scott Miller: Special thanks to Hugh McGrory for joining us for this podcast.

Juanita Fox: Our senior producer and host is Scott Miller.

Scott Miller: Our co-host is Juanita Fox. And our producer is Sharon Sparkes.
