H1: Brad Pitt Wants to Know "What's in the Box": How Technology, Rhetoric, and Disability Studies Play a Key Role in Breaking (open) Black Boxes

Alyssa Hillary's proceedings draft is a cleaned up version of their transcript. The original transcript can be found here. Comments and suggestions on the draft are open here.

 

The idea of the black box started in engineering and the sciences. There are many pieces and processes involved in even simple-seeming engineering and science projects. Behind each piece, there are often many formulas and principles. There's just too much for one person to know how absolutely everything works. Within a given field, or project, or even at the level of a specific person on a project, we have to prioritize which details we take the time to understand and which processes we're willing to use a “black box” solution for. We care what goes into the process because if you put garbage in, you get garbage out, and we care what comes out because that's the information we need. Everything else is someone else's problem. Someone needs to care about what happens inside that black box, but it can't be everybody all the time or very little will ever get done.
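This input-and-output view of a black box can be sketched in a few lines of code. Here `summarize` is a hypothetical stand-in for any opaque routine: the caller checks what goes in and what comes out, and leaves the internals alone.

```python
def summarize(numbers):
    # The internals of the box: someone else's problem.
    return sum(numbers) / len(numbers)

def careful_call(numbers):
    # Garbage in, garbage out: validate what goes into the box...
    if not numbers or not all(isinstance(n, (int, float)) for n in numbers):
        raise ValueError("refusing to feed garbage into the box")
    result = summarize(numbers)
    # ...and sanity-check what comes out (an average must lie
    # between the smallest and largest inputs).
    if not min(numbers) <= result <= max(numbers):
        raise RuntimeError("the box produced garbage")
    return result

print(careful_call([2, 4, 6]))  # prints 4.0
```

The point of the sketch is that everything we verify lives at the boundary; nothing in `careful_call` depends on how `summarize` does its work.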

 

Then the humanities start talking about black boxes. We have cross-fertilization between disciplines, which is great. At least, the humanities start to formally talk about black boxes, with those words. The use of black boxes, and the questioning of what we pay attention to vs. what we don't pay attention to has been going on for a lot longer than the use of the phrase has.

 

But what do different people, different fields, different disciplines, different interdisciplinary communities, black box? That depends on where we stand, where we sit, where we retreat to when it's time to curl up in a ball and shake.

 

As an engineer and as a disability studies scholar, I see quite a bit of it going on. Even as humanities scholars work with digital media and use technological tools to study media, we don't necessarily understand the environments we work in or the tools we work with. Technology tends to get black-boxed, and that can affect how we understand our results, as well as our ability to decide how to proceed when something goes wrong with our technology, or even recognize that something has gone wrong when the process is giving bad output rather than crashing entirely.

 

Then there are engineers and scientists and generally people working with technology. We tend to black box, tend to ignore the cultural forces behind our technology. We notice how our technologies are intended to be used, but there is less attention paid to the uses we didn't originally intend for the design. We don't think about why our technologies are going to get used the way they are.

 

In the humanities, we often treat technological developments, or we have often treated technological developments, as something that happens and the technologies appear from just about nowhere. So we're treating the engineering process as a bit of a black box. Time goes in, technology comes out. Or, for mathematicians, coffee goes in, theorems come out. Why are certain areas of study getting more attention? Why are certain areas of research funded better than others? There's usually less attention to these questions – not none, but less.

 

Now, in the digital age the amount of technology we have to deal with increases. So what do we treat as a black box, and why do we do it? In the digital humanities, we use a lot of software. We use the Internet. How does the Internet work? It's a hodge-podge, we've got websites here, websites there, different protocols, http, https, ftp, and on and on, and I don't think anybody knows how all of it works. Practically speaking, some of the Internet has to go in a black box. Those who better understand more of how it works tend to be concerned about the amount of spaghetti code and the ways that networks are kludged together. And this is what we're working with. How much do we treat as a black box and how much do we try to understand? We're stepping into the public eye with results and tricks and tools to produce them. “We who step on stage should know how the trick is done. The loss of innocence is the price of applause.”1 We should know how our tricks are done, as much as possible. Practically speaking, which ones can we understand?

 

We run statistical analyses of online texts. If we don't understand what our statistical analysis program is doing, how do we know if it's any good? How do we evaluate it with no idea what's in the box? But how much time do we have to figure out what's in the box? Who are we trusting, and how much time and energy would it take not to trust them?
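One practical way to evaluate a box without fully opening it is to feed it inputs whose correct outputs we already know by hand. A minimal sketch, where `word_freq` is a hypothetical stand-in for whatever analysis tool we're trusting:

```python
from collections import Counter

def word_freq(text):
    # Stand-in for an opaque analysis tool; in practice this could
    # be proprietary software we cannot read.
    return Counter(text.lower().split())

# A text small enough to count by hand.
known_input = "the cat saw the dog"
expected = {"the": 2, "cat": 1, "saw": 1, "dog": 1}

# If the box disagrees with our hand count, we know something is
# wrong, even without seeing the internals.
assert word_freq(known_input) == expected
```

Passing such checks doesn't prove the box is right in general, but failing them proves it's wrong, which is often the cheaper question to ask.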

 

Some of our software? Maybe we don't try and get into the nitty-gritty of how our web browser works. Maybe we don't try and figure out why Google's giving us the results it gives us – that's a huge black box, it's proprietary, good luck figuring out exactly how it works. Search engine optimization exists, though, and it's essentially an attempt to open the box at least partially, enough to affect results, even without figuring out the entire algorithm.

 

And there's our home disciplines. Now, when we're interdisciplinary, or when we're trying to be interdisciplinary, generally we've got one or a few “home” disciplines. Not as in, there are a few specific disciplines where people can do interdisciplinary work. That can and does happen everywhere. But consider your home discipline or disciplines, in the specific. I am a mathematician, an engineer, and a disabled person. No matter what I'm approaching, I'm still thinking, to some extent, like a mathematician, like an engineer, like a disabled person. I bring those with me, and what I'm likely to treat as a black box in my other work relates to my experiences and knowledge as a mathematician, engineer, and disability studies person.

 

When we understand a process, or we feel like we could understand a process without too much trouble, we're less likely to treat it as a black box. So the more something relates to one of our home disciplines, the less likely we are to black box it. Similarly, we're more likely to black box what we don't understand! As an engineer, I'm more likely to try to get into the nitty-gritty of technology than someone whose original training was in literature. They're more likely to get into the details of word and phrase choices, and of literary references to important cultural events, than I am. When we are trying to do work that spans several traditional fields, we'll want to get into the details of things we would normally black box, if not entirely ignore (as in, maybe we don't even worry about the input or output for certain processes we judge to be tangential). How do we do so?

 

Working with people across disciplines can work. Forming partnerships across potentially atypical combinations of disciplines can create insights that we wouldn't find with more “standard” combinations.

 

Perhaps Sam, who also presented on this panel, can unpack the black box of how we get from one piece of rhetoric to the next, constructing spaces of advocacy and asking, who is this organization even advocating for?2 And then, once he's unpacked that so I can understand it, I come back and say: as an engineer, I'm positioned to understand the design process for new devices, new technologies. I'm going to get into the question of how we go from our current rhetorical position to the technologies we're creating from that position. How does this lead to given technologies, and how do we use those technologies? I'm an engineer, I'll unpack that part.

 

Which gets me into technology, people with disabilities, and how we get technology that's usable by people with disabilities. Again, I'm a disabled engineer. I'm going to look at questions that are relevant to disabled people and to engineers. What's in the box?

 

Who is this really for? Often, when we're designing technology that, at least in name, is for people with disabilities, for disabled people – we call this assistive technology – who's getting asked what the needs are? Usually, caretakers and professionals are the ones being asked. It's often not the end users, disabled people, who will go home and use these devices regularly. You get people advertising technology for communication supports based on testimony from parents, professionals, teachers, but not the people who are using these applications, the people for whom these applications are our primary voices. You get people advertising devices as if the reader must be a caregiver, a professional, or just generally not the disabled person who is going to use the device. They market to the people they expect to give them money, which means convincing insurance companies to cover the device or application and convincing doctors to prescribe it, but which often doesn't require convincing a disabled end user of anything3.

 

So who gets asked what our needs are? Who's designing our technology? That's another black box that I'm situated to unpack. See, I know of a group that's trying to design technology to track environmental factors related to meltdowns for autistic people, and as far as they know, I am the first autistic person that they've met. So who's designing this, and who are they talking to, if, prior to me, they aren't talking to any autistic people? In practice, who is this for, the end user? I don't know. It clearly isn't us.

 

Where do we stand? Where do we sit? What's a black box, and to whom? I want to design a communication support that treats some communication problems for autistic people, for neurodivergent people in general, as problems of translation4. My friends who know me better can understand me more than strangers can. Because they know from experience what context I'm trying to put in, they know how my syntax changes under stress. They know how my communication tends to differ from standard, white, abled, middle-upper class, neurotypical, cis-het, every other kind of normative speech there is. They can translate from what I'm saying to what they should understand. They can also do this for other people who don't know me as well.

 

They also aren't usually present. I almost always have my laptop with me, though. I almost always have access to the internet. Many people almost always have smartphones with them. And Google Translate exists. It's not perfect by any stretch, but it exists. Can we apply machine translation or computer-assisted translation to translating between neurotype dialects? To answer this, I must get into the nitty-gritty of how computer-assisted translation works, in a way that, if I were just trying to translate between two “standard” languages, I wouldn't need to understand, because I'd be using the software out of the box, as it stands. I have to crack open that black box.
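One piece of what's inside that box, the translation memory used in computer-assisted translation, can be sketched briefly: store past pairings of an utterance with its interpretation, then fuzzy-match new utterances against the stored ones. The pairs below are invented for illustration, not drawn from any real system or from the presenter's design.

```python
import difflib

# A toy translation memory: past utterances paired with the
# interpretations a familiar friend might supply. Invented examples.
memory = {
    "too loud in here": "I am overwhelmed by the noise and need to leave",
    "words are gone": "I cannot produce speech right now; please use yes/no questions",
}

def suggest(utterance, cutoff=0.6):
    # Fuzzy-match the new utterance against stored utterances and,
    # if one is close enough, return its stored interpretation.
    matches = difflib.get_close_matches(utterance, memory, n=1, cutoff=cutoff)
    return memory[matches[0]] if matches else None

print(suggest("words gone"))
```

A real system would need far more than string similarity, which is exactly why the box has to be opened: character-level matching knows nothing about context, stress, or syntax, the things a familiar friend actually uses.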

 

It's about what our position is, in the world, what we're trying to do, what we're trying to understand. That tells us what black boxes we need to open up and ask, what's in this box, and which ones we can leave alone.

1Qi. “Case 195: The Magicians Code.” The Codeless Code: Fables and Koans for the Software Engineer. 15 June 2015. [http://thecodelesscode.com/case/195]

2Harvey, Samuel T. “A Rhetorical Journey Into Advocacy.” MA Thesis. St. Cloud State University, 2016. Web. [http://repository.stcloudstate.edu/engl_etds/54/]

3Hand2mouth. “What do you want? Part 2: Who's 'You'?” Hand to mouth: Assistive Technology. 12 Dec 2010. Blog post. [https://hand2mouth.wordpress.com/2010/12/12/what-do-you-want-part-two-wh...

4Hillary, Alyssa. “On Cognitive Interpretation Software.” Society for Disability Studies Annual Meeting. Hyatt Regency, Atlanta, GA. 13 June 2015.

Session abstract or description: 

Black boxes are a concept that originated in engineering and science. The idea is that a process can be so complex that the input and the output represent the most salient aspects of that process. This panel will critically analyze black boxes using disability studies and theoretical frameworks from various disciplines such as rhetoric, writing pedagogy, and engineering in the hopes to show that black boxes generate provocative questions for interdisciplinary research. In particular, our panel directs attention to the ways in which disciplines can become black boxes that obscure difference, identity, translation, and embodiment.

Session type: 
Panel
Session hashtag: 
h1
Session room: 
Basil 216
Session time: 
Concurrent Session H
Session narrative: 

Description of session image: A profile view of Brad Pitt from the movie Se7en. In this photo, Pitt sports butterfly stitches and cuts on his face. White text contours him, bearing the following message: Want to know what's in the box? Find out at #h1 #cwcon Saturday, 5/21 at 4:30pm, Basil 216 #cripthebox.

Link to Alyssa Hillary's presentation: https://www.youtube.com/watch?v=nxnoPpJ7KYg&feature=youtu.be

Link to Sam Harvey's Google Slides presentation: https://docs.google.com/presentation/d/1oW2erngJkdv2kWX6Qoe_-9QG8oE1ioiH2F8tLLWy_6U/edit?usp=sharing

Proceedings participation: 
yes

Comments

Samuel Harvey:

I have a transcript of my presentation at https://alyssahillary.wordpress.com/2016/05/24/computers-and-writing-201...

Tentatively my proceedings plan is to edit the transcript to 1) be more formal (though still understandable) and 2) make my citations/connections to other work explicit.