This post is based on a keynote presentation by Dr. Max Liboiron at the Citizen Science Association biennial meeting in Raleigh, North Carolina, on March 14, 2019. It has been modified for the written form.


It’s an honor and a privilege to be on this land here in Raleigh, North Carolina. My name is Dr. Max Liboiron and I run CLEAR, the Civic Laboratory for Environmental Action Research, which is an explicitly feminist and anticolonial laboratory that studies marine microplastics. But I’m not going to spend a lot of time talking about what we do with plastics. I’m going to talk about how we do it, because I think that has better legs and is more useful for a greater variety of people in this room. I think I was invited because of the “how” of our work, rather than the what. First: we do environmental justice work in academia without appropriating environmental justice work.[1] That’s very important and I think actually quite rare. Number two: we have an extremely diverse lab, not just in terms of who is there but how they are there–as decision makers, as leaders, as co-teachers–and that has gained us a lot of attention. And three: we do research with people who don’t have science degrees, and that science works–and by “works” I mean that people without science degrees are invested in the work they do (because it’s theirs). They’re invested not because it’s my really good idea and I invite them to come in, but because the projects are theirs and I’m there to facilitate them. Also, the data we produce is usually useful for things like scientific peer-reviewed publications. So I’m going to talk about how those things happen.

A primer on power


I’m going to start by talking about values and power. I take one of my cues from this excellent paper by Mary O’Brien from 1993 called, “Being a Scientist Means Taking Sides.” Mary O’Brien is a feminist biologist, back when being a feminist biologist wasn’t cool. It’s now super cool. O’Brien says that as soon as you make any scientific decision, you’re doing political work. As soon as you choose to ask one research question, you’re choosing not to ask other ones. That’s not inherently good or bad, but you do start to align yourself with some things and not others. Then you choose who you’re going to work with: either fancypants senior scholars with degrees or people who don’t have degrees. You’ve aligned yourself again. What metrics do you use? Risk assessment, for example, is a metric that assumes some amount of harm and pollution is going to occur and your job is to adjudicate how much, versus other forms of metrics that don’t assume that some harm or pollution will happen.[2] Where do you get your grants from? How and with whom do you share your knowledge?

All of these are decisions that align with some things, some groups, and not others. They reproduce certain values. Science isn’t value free. I think most of us know that intuitively, since science purports to value things like objectivity, validity, and replication but also things like the autonomous, individual hero-thinker, pioneering adventurism (I use those words intentionally), and valiantly getting your data at all costs. Those ideals reproduce certain relations.

When I’m talking about values and alignments, I’m talking about power. When I say power, I don’t mean someone with a stick coercing people to do things (although that is a manifestation of power–certain sorts of people tend to be the ones with the sticks). What I mean is the way that some things seem natural and normal, even inevitable, and carry on easily, versus things that are very hard to do, that don’t make as much “sense,” that don’t tend to happen. The same things happen when we’re making our scientific decisions. Certain research questions get asked a lot. Certain ones don’t. Certain kinds of people get worked with a lot. Certain kinds of people don’t.

Power is more like infrastructure than decisions or behaviour–more like how some decisions and behaviours by some people are allowed to happen, valued, and reproduced, while others are harder to do. Some ways of doing things, some forms of knowledge, just flourish. Often at the expense of others. That’s what power is.[3]

So as you’re reading, there might be moments where, in your gut, you’re like, “nope, that seems impossible.” Or, “what?! No.” There’s a good chance that that’s a moment of meeting up with power. That gut feeling doesn’t mean you’re a jerk or that you’re wrong–it’s just a potential way to point out how power makes some things seem possible and good, and other things seem impossible and gross. So try paying attention to those things as I go through–that gut feeling is one of the main ways that I learn.

When it comes to values and reproducing some things and not others in science, I work very hard to produce a set of values that aren’t usually foregrounded in science. I’ll talk about two of them today: equity and humility. Equity comes out of a lot of different social movements. Humility comes out of the teachings of my Elders. I’ll spend the rest of the time talking about how equity and humility are guiding forces in CLEAR’s scientific work.

Choosing Equity


First of all, there’s a difference between equity and equality. They often get conflated. Equality means treating everyone exactly the same. It’s essentially a math and distribution problem–it’s what most people say when they talk about fairness. “That’s not fair” usually means that everyone wasn’t treated exactly the same. Equality can be deeply inequitable.

Equity doesn’t strive to treat everyone the same–it doesn’t strive for fairness. It recognizes that people start from very different social, political, and economic locations, and it tries to both address and overcome those unevennesses.

I’m sure it’s easy to imagine how equity might work in citizen science: that we might try to get certain types of people–Black, Indigenous, people of color (BIPOC), women, junior scholars, people without degrees–involved in our science. That’s not what I’m talking about. The inclusion model is often a model of equality, where it brings people into a space that’s already not designed for them. It treats everyone the same, bringing them into contact with accredited science. We already know this doesn’t work–you can bring many women and people of color into science and they still “fall out of the STEM pipeline,” because that pipeline is built for someone else.

Equity in scientific instrumentation

I’m talking about a different model. Let’s look at one way that equity can manifest in technology creation.

Participants from 5 Gyres and Plastic-free Bermuda conduct the universal method for collecting shoreline marine microplastics in Bermuda 2015. Photo by 5 Gyres.

This is a group of us in Bermuda. We’re conducting a universal, scientific protocol for collecting microplastics on shorelines. This universal protocol comes out of the UN Environment Programme and NOAA (United States). You go to the shoreline and lay down a transect (which is a very long line) and you put quadrats (which are squares) at random locations on it. Then you scoop the top 3-5 cm of sand out of the square and put it through a sieve, so plastics and other items are left in the sieve. Now you can say how many plastics were there per unit of space to get a density measure. Universal protocols are really nice for science because they mean we can compare things across contexts.
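The payoff of this universal protocol is a comparable density number, and the arithmetic behind it is simple: total plastics found, divided by total sand surface sampled. Here it is as a minimal sketch; the counts and quadrat size below are hypothetical, not from the talk.

```python
# Density from a quadrat survey: total plastics found across all quadrats,
# divided by the total sand surface that was sieved.
# Counts and quadrat size below are hypothetical.

def plastic_density(counts_per_quadrat, quadrat_side_m=0.5):
    """Microplastic items per square metre across the sampled quadrats."""
    quadrat_area = quadrat_side_m ** 2                     # one quadrat, in m^2
    sampled_area = quadrat_area * len(counts_per_quadrat)  # all sieved sand
    return sum(counts_per_quadrat) / sampled_area

# Five quadrats placed at random along one transect:
density = plastic_density([3, 0, 7, 2, 4], quadrat_side_m=0.5)
print(density)  # 16 items over 1.25 m^2 -> 12.8 items/m^2
```

Because every team scoops the same depth of sand and sieves the same way, densities computed like this can be compared across shorelines–which is exactly what breaks down when the protocol itself can’t be performed on a given shore.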

This is a typical shoreline where I live and work in Newfoundland:

The shoreline along the Quidi Vidi Gut, St. John’s, Newfoundland in April. Photo by Max Liboiron.

I dare you to sieve that.

In fact, I can tell you from experience that you can’t even walk on it without sliding feet-first into the ocean, never mind laying a transect. We find over and over in Newfoundland and Labrador that almost no universal protocols work for us in marine plastic research.[4] So we spend a lot of time building our own instruments.

We have guidelines for our instrument development that are designed to be equitable.

  1. They have to work in Newfoundland and Labrador.
    This seems pretty obvious, but this kind of place-based design is rare, especially given that our place is remote, mainly rural, very windy and icy, and isn’t characterized by robust and always-working infrastructure.
  2. Made of materials you can get in rural Newfoundland.
    Many rural villages and towns, if they have a store, have a single general store. We have lab members who have family in these places take photos of the hardware sections of their general stores, and then we build only out of what is in those photographs.
  3. Doesn’t need electricity to work or be built.
    Most places in the province have electricity, but it doesn’t work all of the time. There certainly isn’t wifi and cell service coverage you can depend on.
  4. Costs less than $50 to make.
    Newfoundland and Labrador has been one of the poorest provinces in Canada for most of its history. While one person might not have $50 of expendable income to put towards an instrument, we assume that a group of people who share a concern could pool $50.
  5. Can be built and used by people outside the ivory tower.
    We build using common tools, and our “how to” guides are written and illustrated for audiences outside of academia. It’s not much use to build something that needs a degree to operate.
  6. Open source licensing & online.
    All of our instruments are open (as in free and available to use), and instructions to build and use them are online. [5]
  7. Repairable yourself, with local materials.
    Our environment is pretty rough and will break most technology, so black boxes and rare materials are not on the menu.

These guidelines are equitable in two ways. First, they are designed for the specificity of the place we work in. Universal tools that assume that “anyone” can use them don’t take into account the unique contexts here. These do. This means that sometimes our tech won’t work in other places–they are designed for here! What counts as equity is not universal. It is place-based.

Secondly, these particular guidelines make it so anyone can use our instruments to answer their own research questions. Accredited scientists and our institutions are not an obligatory passage point for doing science in this case. This means that if community groups or others want to work with me, they can. But they do not have to work with me if they want to do research on plastic pollution. That is a much more equitable relationship.

So what does this look like in terms of an actual scientific instrument? Below is a Manta Trawl.

an instrument with metal wings and a metal mouth that ends in a net at the end.
The manta trawl in action. It’s the scientific standard for collecting marine microplastics from surface water. Photo by 5 Gyres.

The manta trawl is the scientific standard for looking at surface water plastics. Roughly half of all plastics float in water, and of those, roughly 90% are microplastics smaller than a grain of rice, so they’re hard to see and you need special instruments. In the manta trawl, plastics go through the mouth of the trawl as the wings stabilize it. Plastics go into the net and collect at the end for your sample. The manta trawl is $3,500 USD. There are only a few places in the world that make them.

What we built instead is BabyLegs. It’s called that because it is made with baby tights. Water goes through the container and into the tights, where plastics collect at the toes. There are empty soda pop bottles on the sides to keep it stable. BabyLegs costs $12 CAD. We actually have a range of different surface trawls–BabyLegs, the LADI trawl, the Ice Cream Scoop–that are built for different environments and different goals that people might have for their research, and they’re all open source.

BabyLegs, CERN Open Hardware License. Created by Max Liboiron at CLEAR. Photo by David Howell for MEOPAR.


The deficit model: a tale of pomegranates and cod

The question I get asked the most when I talk about our suite of surface trawls (and also because my day job is to talk to researchers about Indigenous research partnerships) is the question of capacity: “Well, how do you know that people have the capacity to use these instruments correctly and make good data?” “How do you know that Indigenous people have the capacity for systematic knowledge production like science?” I get asked this so much that now I have a story I tell about that question.

When I first came to Newfoundland, I came from New York City, where I had lived for 15 years. Culture shock is an understatement for what that felt like. One of the ways this culture shock manifested was in rage and despair that there were no pomegranates in Newfoundland. In New York, every day I would get a pomegranate from the bodega by my apartment and eat it in front of the TV. It was very nice. And there are no pomegranates in Newfoundland, except for a week at Christmas from Costco. I was like, “Why can’t Newfoundland manifest a pomegranate?!” I know we can’t grow any in a cold and partially subarctic region, fine, but why couldn’t we just get a pomegranate here? There are boats everywhere–why can’t they just ship me a pomegranate? I started having thoughts about how backward Newfoundland was and how it didn’t have the capacity for pomegranates, without recognizing the fact that Newfoundland is not a pomegranate place. It is a cod place. It is a cod center for the world. Some of the best, most delicious cod are from Newfoundland. I was using a pomegranate measuring stick for a cod place, and that ensured the place would never come up to that measure properly.

So when people talk about capacity issues, they’re often using what’s called the deficit model, where the place or group or person is not quite up to snuff. There are expectations, and the place or group or person cannot meet them. So children are always undercooked adults. Citizen scientists are always not quite scientists. Traditional ecological knowledge is underdeveloped science. It’s using the pomegranate stick for a cod place. They will always be in deficit because there has been a category error.

I think it would work out better if we understood cod places as cod places, children’s knowledge as children’s knowledge, and citizen scientists as citizen scientists–different things than pomegranates and adults and accredited scientists. Because if you always measure one by the measuring stick of the other, it will never match up. That is the deficit model.

Fishermen kick data butt

My first lesson in the deficit model was when I did a participatory citizen science project with a biologist named Yolanda Wiersma and our student Matt McWilliams. Participatory citizen science is where, instead of accredited scientists determining what the project is about, they just facilitate and grease the wheels of other people’s needs and knowledge. We went into Fogo Island, which is a northern fishing community on an island off the north coast of Newfoundland, and we said, “What are your concerns? How do we articulate those as research questions? How do we get data to answer them? How do we analyze the data together?”

The fishermen (“fisherman” is a term used by fish harvesters of all genders, so I’ll use what they use) decided to look at temperature, because they thought the fish were coming out of season due to water temperature changes caused by climate change. So we gathered temperature data by putting temperature loggers on their fishing gear, and then we, the accredited scientists, put the data together and made some graphs of average temperatures to analyze together. And the fishermen said, “Can we just see the data?” And we’re like, “The raw data?” And they said, “Yeah, give us the spreadsheet.” I thought that was odd, because I don’t hang out with spreadsheets and look at them to analyze them–I give them to R or another program that tells me things. But we gave them the raw data on spreadsheets, and they sat eating donuts and reading the raw data and told us things.

Fishermen are expert samplers– basically, fishing is judgmental sampling and if they’re not good samplers they do not have a livelihood. And fishermen keep catch logs of what’s going on– the weather, date, where they are catching fish, how many fish. And they have these logs for generations of fishing. They’re basically handwritten datasheets and they study them all the time.  It’s what they excel at. They extract data that we couldn’t have seen because it’s contextualized and they have a relationship with the data. Also, they saw things in there that they could use for their fishing–things they didn’t share with us. None of our business. They found it valuable and they have the data and are using it.

So in the scenario above in the equity diagram, I think a lot of people imagine the little purple guy to be a non-scientist of color who’s a woman with a disability or something. But sometimes it’s a scientist. Not in terms of privilege and oppression–because accredited scientists will almost always have more privilege than some of the unaccredited knowledge producers we work with–but because sometimes our social location means we don’t know things and can’t know things, because of how our privileges steered us in certain ways.

Some of us have been having this conversation for literally hundreds of years. First Western science and then citizen science are latecomers to the knowledge game. There are a lot of other types of robust, systematic ways of knowing out there–Traditional Ecological Knowledge that belongs to Indigenous peoples, local knowledge that belongs to fishermen– and you don’t want to accidentally measure it by the pomegranate stick.

So equity is contextual. Just because there’s inequity does not mean there is a knowledge deficit.

Doing humility

Just like equity and equality get conflated a lot, modesty and humility are often conflated. Modesty means you don’t talk about how great you are and the great things you’ve done, because it might elevate you over other people, and you don’t want to do that because that’s rude. That is true.


Humility is a little bit different. The way it was taught to me, humility is one of the ways to talk about how we’re all connected. Everyone’s connected– so we can’t be here in this room today at the Citizen Science Association in Raleigh unless someone is looking after our kids, our dogs, our plants, our students, our classrooms, our offices, our labs. And thank you to those people. We can’t do the work we do without standing on the shoulders of others– we have administrators, a janitorial staff, an I.T. staff. I couldn’t do this talk/blog post without those people. To say I do knowledge production that is just my knowledge production is off, because it requires a lot of people. Humility is about recognizing that and honoring that. Sometimes when you’re modest and not talking about the things you’re doing, you are not being humble because you’re not recognizing those other people and even non-humans whose shoulders you stand on.

So what does that look like in our science?

It takes a village to write a paper

CLEAR has a process called “equity in author order.”[6] It centers both humility and equity. In authorship on scientific papers, certain sorts of people never end up very high on the author list, or on it at all. Certain types of knowledge production are not often given credit. So CLEAR developed this process whereby we try to be more equitable and humble.

Network diagram of people’s names and how they came to know about the scientific project under review, with links to one another. Photo by Max Liboiron.

There are between 18 and 24 people in CLEAR at any time, which is a pretty big lab for a junior scholar without tenure like me. This diagram documents a conversation where we went around the table and asked people, “How do you know this project?” So one person says, “Oh, I’m not involved in the project, but I heard about it at the lunch table and was just talking about it with Jess. We had a chat.” And Jess says, “I remember that conversation–you gave some really good ideas!” So perhaps that person did contribute to the project. And the next person says, “I know about this project because it’s part of my thesis and I did the statistics for it. I had some trouble with one of the R packages, so I went to the library and got some help.” Now we add the librarian to the map and draw a line between the two people. And so on. In the end, this is a map of humility, of the connections we had that made the project possible, including people who wouldn’t normally be recognized. It helps people see that we are not independent geniuses, that we are always in connection with others in all acts of knowledge production. You may notice that our papers have rather long lists of coauthors.
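The mapping exercise above is essentially building an undirected graph of contributions: each answer adds a person and a link. As a rough sketch of the bookkeeping (the names and connections here are invented for illustration, not CLEAR’s actual records):

```python
from collections import defaultdict

# Each "how do you know this project?" answer adds a person to the map
# and links them to whoever connected them to the project.
# All names and connections below are hypothetical.
links = defaultdict(set)  # person -> the people they are linked to

def add_connection(person, named):
    """Record a mutual link between two people in the humility map."""
    links[person].add(named)
    links[named].add(person)

add_connection("lunch-table colleague", "Jess")  # a chat that shaped the project
add_connection("thesis student", "librarian")    # help with an R package

# Everyone who appears in the map is considered for the author list.
candidates = sorted(links)
print(candidates)
```

The point of keeping it as a graph rather than a flat list is that the links themselves are the argument: they show who made the work possible, not just who touched the manuscript.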

Another part of humility is recognizing forms of scientific knowledge production that don’t usually get noticed or credited. Like cleaning. At CLEAR, cleaning is really, really important because when you study microplastics, the microfibers from synthetic clothing have a real potential to contaminate our samples. It’s someone’s job–we call them the fleece police– to make sure that no one is wearing fleece in the lab. They also make sure everything is really clean and everyone pitches into cleaning. If we don’t do that we don’t have valid science. So cleaning is something we recognize as scientific labor, but there are lots of other ones as well: organizing meetings, doing care work, cleaning data. This relates back to equity because the people who tend to do that work tend to be junior, women, etc.

A very long list of authors with small print
A partial author list from: W. Leung, C. D. Shaffer, L. K. Reed, S. T. Smith, W. Barshop, W. Dirkes, M. Dothager, P. Lee, J. Wong, D. Xiong, H. Yuan, J. E. J. Bedard, J. F. Machone, S. D. Patterson, A. L. Price, B. A. Turner, S. Robic, E. K. Luippold, S. R. McCartha, T. A. Walji, C. A. Walker, K. Saville, M. K. Abrams, A. R. Armstrong, W. Armstrong, R. J. Bailey, C. R. Barberi, L. R. Beck, A. L. Blaker, C. E. Blunden, J. P. Brand, E. J. Brock, D. W. Brooks, M. Brown, S. C. Butzler, E. M. Clark, N. B. Clark, A. A. Collins, R. J. Cotteleer, P. R. Cullimore, S. G. Dawson, C. T. Docking, S. L. Dorsett, G. A. Dougherty, K. A. Downey, A. P. Drake, E. K. Earl, T. G. Floyd, J. D. Forsyth, J. D. Foust, S. L. Franchi, J. F. Geary, C. K. Hanson, T. S. Harding, C. B. Harris, J. M. Heckman, H. L. Holderness, N. A. Howey, D. A. Jacobs, E. S. Jewell, M. Kaisler, E. A. Karaska, J. L. Kehoe, H. C. Koaches, J. Koehler, D. Koenig, A. J. Kujawski, J. E. Kus, J. A. Lammers, R. R. Leads, E. C. Leatherman, R. N. Lippert, G. S. Messenger, A. T. Morrow, V. Newcomb, H. J. Plasman, S. J. Potocny, M. K. Powers, R. M. Reem, J. P. Rennhack, K. R. Reynolds, L. A. Reynolds, D. K. Rhee, A. B. Rivard, A. J. Ronk, M. B. Rooney, L. S. Rubin, L. R. Salbert, R. K. Saluja, T. Schauder, A. R. Schneiter, R. W. Schulz, K. E. Smith, S. Spencer, B. R. Swanson, M. A. Tache, A. A. Tewilliager, A. K. Tilot, E. VanEck, M. M. Villerot, M. B. Vylonis, D. T. Watson, J. A. Wurzler, L. M. Wysocki, M. Yalamanchili, M. A. Zaborowicz, J. A. Emerson, C. Ortiz, F. J. Deuschle, L. A. DiLorenzo, K. L. Goeller, C. R. Macchi, S. E. Muller, B. D. Pasierb, J. E. Sable, J. M. Tucci, M. Tynon, D. A. Dunbar, L. H. Beken, A. C. Conturso, B. L. Danner, G. A. DeMichele, J. A. Gonzales, M. S. Hammond, C. V. Kelley, E. A. Kelly, D. Kulich, C. M. Mageeney, N. L. McCabe, A. M. Newman, L. A. Spaeder, R. A. Tumminello, D. Revie, J. M. Benson, M. C. Cristostomo, P. A. DaSilva, K. S. Harker, J. N. Jarrell, L. A. Jimenez, B. M. Katz, W. R. 
Kennedy, K. S. Kolibas, M. T. LeBlanc, T. T. Nguyen, D. S. Nicolas, M. D. Patao, S. M. Patao, B. J. Rupley, B. J. Sessions, J. A. Weaver, A. L. Goodman, E. L. Alvendia, S. M. Baldassari, A. S. Brown, I. O. Chase, M. Chen, S. Chiang, A. B. Cromwell, A. F. Custer, T. M. DiTommaso, J. El-Adaimi, N. C. Goscinski, R. A. Grove, N. Gutierrez, R. S. Harnoto, H. Hedeen, E. L. Hong, B. L. Hopkins, V. F. Huerta, C. Khoshabian, K. M. LaForge, C. T. Lee, B. M. Lewis, A. M. Lydon, B. J. Maniaci, R. D. Mitchell, E. V. Morlock, W. M. Morris, P. Naik, N. C. Olson, J. M. Osterloh, M. A. Perez, J. D. Presley, M. J. Randazzo, M. K. Regan, F. G. Rossi, M. A. Smith, E. A. Soliterman, C. J. Sparks, D. L. Tran, T. Wan, A. A. Welker, J. N. Wong, A. Sreenivasan, J. Youngblom, A. Adams, J. Alldredge, A. Bryant, D. Carranza, A. Cifelli, K. Coulson, C. Debow, N. Delacruz, C. Emerson, C. Farrar, D. Foret, E. Garibay, J. Gooch, M. Heslop, S. Kaur, A. Khan, V. Kim, T. Lamb, P. Lindbeck, G. Lucas, E. Macias, D. Martiniuc, L. Mayorga, J. Medina, N. Membreno, S. Messiah, L. Neufeld, S. F. Nguyen, Z. Nichols, G. Odisho, D. Peterson, L. Rodela, P. Rodriguez, V. Rodriguez, J. Ruiz, W. Sherrill, V. Silva, J. Sparks, G. Statton, A. Townsend, I. Valdez, M. Waters, K. Westphal, S. Winkler, J. Zumkehr, R. J. DeJong, A. J. Hoogewerf, C. M. Ackerman, I. O. Armistead, L. Baatenburg, M. J. Borr, L. K. Brouwer, B. J. Burkhart, K. T. Bushhouse, L. Cesko, T. Y. Y. Choi, H. Cohen, A. M. Damsteegt, J. M. Darusz, C. M. Dauphin, Y. P. Davis, E. J. Diekema, M. Drewry, M. E. M. Eisen, H. M. Faber, K. J. Faber, E. Feenstra, I. T. Felzer-Kim, B. L. Hammond, J. Hendriksma, M. R. Herrold, J. A. Hilbrands, E. J. Howell, S. A. Jelgerhuis, T. R. Jelsema, B. K. Johnson, K. K. Jones, A. Kim, R. D. Kooienga, E. E. Menyes, E. A. Nollet, B. E. Plescher, L. Rios, J. L. Rose, A. J. Schepers, G. Scott, J. R. Smith, A. M. Sterling, J. C. Tenney, C. Uitvlugt, R. E. Van Dyken, M. VanderVennen, S. Vue, N. P. Kokan, K. Agbley, S. K. 
Boham, D. Broomfield, K. Chapman, A. Dobbe, I. Dobbe, W. Harrington, M. N. Ibrahem, A. Kennedy, C. A. Koplinsky, C. Kubricky, D. Ladze et al. Drosophila Muller F Elements Maintain a Distinct Set of Genomic Properties Over 40 Million Years of Evolution. G3: Genes|Genomes|Genetics, 2015; DOI: 10.1534/g3.114.015966

I have never produced a paper with more than 300 people on it. But it can be done. A 2015 paper on the Higgs boson out of CERN, home of the world’s largest particle accelerator, has over 5,000 authors, some of whom are deceased. It is an intergenerational paper that not only builds on others’ shoulders but makes the case through the author list (rather than citations) that the work could not have been done without those people. I’m sure infrastructure had to be stretched to accommodate the list. Another example is a paper on fruit flies that has over 1,000 authors, almost all of them undergraduates (image above). The students, across several universities, annotated DNA sequences by hand. And they got all the credit. When I give these presentations, this is one of the places where some people start to get those feelings. Those feelings of… no… don’t wanna… how?… but… nrrg.

What is being lost when we give more people credit? The answer is not nothing. That’s why people have those feelings. What is the value reproduced through a scarcity of credit? It’s an honest question. What is the value that is produced when some people get some kinds of value and some people don’t? [7]

The goal of incorporating equity and humility as guiding values in scientific work is that I want to change what science looks like. Because the scientific status quo is not ideal for all people.

“Democratizing science is one long meeting, so pay me for my time.”

Full disclosure: I’m being paid $500 USD to give this keynote. Because I’m an expert and I’m being recognized as such. There are people in my lab who are covering for me right now, and I’m paying them at the union rate. They could be doing other things right now but instead, they’re probably cleaning little microfibers. I pay them for that. I pay everyone who does any scientific work for me, always. Citizen science or otherwise. And I almost always pay them with money.

If I had been offered the privilege of exposure for my work instead of money to do this keynote, I would have said no. I can get more exposure by writing “citizen science” on my naked chest on the Internet. I’m sure there would be follow-up interviews. I don’t need exposure. I want money. I assume that other people also want money when they produce value for my goals, but I can’t assume what people want in a reciprocal relationship. In a reciprocal relationship, that’s for them to determine. So far, people tend to want money, so that’s what I give them.

There’s a great book called Digital Dead End by Virginia Eubanks, where she tries to democratize access to digital technology and gets a bunch of Black women from a YMCA involved. And one of them says, “Democracy is an endless meeting, so pay me for my time.” One of the main reasons I don’t often identify as a practitioner of citizen science is because a lot, though not all, citizen science projects are based on a sacrifice economy. In a sacrifice economy, value continually accrues to people with more privilege (usually accredited scientists), and it’s usually drawn from folks with less privilege. Perhaps your citizen science projects gain value from retired white guys with castles and good pensions, but mine do not. A lot of the people who come talk to me wanting diversity or inclusion in their projects tend to want to draw in people with less privilege to do free work for them. It tends to reproduce inequity, and it gets called diversity. So I pay people.

Photo of two women with plastic samples on the table in front of them
Max Liboiron and Jess Melvin at a community peer review meeting in Bauline, Newfoundland. 2017. Photo by Bojan Fürst.

There are concrete benefits from paying people that far outweigh the money I spend on them or the amount of time I spend writing grants to pay them. In the photo above, former CLEAR member Jess and I are at a community meeting in the fishing community that Jess is from. Whenever I work with a community or in a region, I always hire members of those communities into my lab as full, decision-making lab members who have autonomy over their projects. They aren’t data grunts. They’re full collaborators.

One of the main forms of sample collection we do (and which is often identified as citizen science by others) is to collect fish guts from fishermen to investigate whether, and how many, plastics the fish are ingesting.[8] The protocol that I came up with involved going to the wharves during the recreational fishery, walking up to fishermen who are gutting their fish, and saying, “Hi. We’re scientists from Memorial University and we’re doing this project on plastics. Can we please have your fish guts? If you want to know whether your fish ingested plastics or not, you can nickname your fish Nemo or Fluffy or whatever and we’ll post its individual results on our website.” We get a couple of hundred guts this way. It’s pretty successful.

Then Jess took over sample collection. She got a lot more guts from many more places. She can walk up to fishermen and say, “Hey, Bill.” That’s the first difference. Then, instead of asking for the guts, which fishermen have to cut out for us, she asked for the carcasses when they were done, and she did the work of cutting out the guts herself. And while she was at it, she also cut out the cod britches (which are fish ovaries) and cheeks (which are cheeks), which are delicacies but a pain to cut out of the fish. She would give those back to the fishermen as a thank you. And then she would cut out the guts for us, and also measure the length of the carcass so we had more data. She’d ask them if they wanted to name their fish, and then she could just tell them if there was plastic in them next time she saw them. We found that even though everyone expressed interest in hearing whether their particular fish had ingested plastic, no one actually checked the website. So Jess’s method was much better. And then she could also talk to people about what it meant if their fish had ingested plastics. Because Jess was a full lab member, all these relationships were also lab relationships.

Community peer review

One of the questions I am often asked by my colleagues is how we get so many people at community meetings. There are three answers to that. First, because we’ve hired people from that community, some of those people are coming to see them, to see what kind of fancy pants things they’ve gotten up to with science. Second, we’ve done enough of these meetings over time, where people have asked us questions and those questions have become our research questions, that they are waiting for our answers to their questions. They’re waiting for us, and we’re a bit late (since academic time is one of the slowest types of times!). And third, these meetings are not knowledge dissemination or science communication meetings. These meetings are what we call community peer review.

Community peer review is just like academic peer review, but what peer means is different. Otherwise, it is basically identical. We say, “this is what we did, these were our methods, these are our findings… What do you think? Can we publish this?” This is another place where feelings often happen for people: “what do you mean a community can say you can’t publish something?!”

I’ll tell you the legacy of this community peer review process, and it starts in anthropology. Some anthropologists started to notice, especially when they talked to Indigenous groups, that sometimes they would ask a question and the person would say, “now that’s a really hard question. I don’t know.” And the anthropologist would rephrase it. “I don’t know.” “I don’t understand.” “You should go talk to so-and-so” (but so-and-so was out on the Land for five months or dead or no longer accessible). The anthropologists started noticing that these were forms of refusal. Whether you want to be part of it or not, the power dynamics are that accredited scientists and researchers always have more standing in formalized knowledge structures than people who aren’t recognized as such. Even if you think that’s wrong. So people were refusing without saying no. And it can be hard to notice when refusal is going on, so a formalized method called ethnographic refusal is one way to recognize and formalize that, so refusal is on the table as a research process and in protocol.[9] In community peer review, people rarely say, “no, you can’t publish this.” But they’ll say, “maybe you should go do this work over in Forteau,” which means, “go away to somewhere else, please.” So that’s what we watch for.

Communities have the right to self-determination and to how they are represented in research. Full stop. As a researcher, I have stakes. They have rights. Stakes are not more important than rights– my tenure file is not more important than a community’s right to self-determination and how they’re represented. Even if that community is not an “impoverished” community or community of colour. Community peer review is one of the ways we do this, methodologically.

So far CLEAR has never ever been told not to publish, mostly it seems because we have been working with community members as paid collaborators the entire time, and so we have a somewhat decent idea of the harms and benefits as they are determined by communities. When I was a junior scholar I read a study about prisons. The social scientists went in and said, “We’re going to go into this prison and do this great work, and all the peer review and Institutional Review Boards say it’s good work.” And they got to the prisoner participants, and the prisoners said, “Honey, I’m glad you watch Orange Is The New Black, but you really don’t understand this place and what counts as benefit and harm here.” And the social scientists were humble enough to report that back in their findings. That’s an extreme but quintessential example of how, as accredited scientists, we can’t always know and often don’t know what counts as a harm or a benefit for communities. That’s for them to determine. It is up to us to believe them and carry those wishes out.

Being refused is a positive form of relationship building. It means that you are working with others to decide what the best routes of knowledge dissemination, storage, access, and use are. Academia is not always the best repository for all knowledge, which we know– if you’ve ever worked with sacred knowledge or personal knowledge or highly contextual knowledge that loses meaning when it’s categorized or quantified. Refusal means figuring out where knowledge should be. In our case, perhaps academia and open access publishing is not the best place to write that a community is contaminated. Perhaps better places are town halls and fishermen’s unions and local hospitals and Elders councils. Who will take the best care of the circulation of this knowledge? This is what community peer review is an opportunity for. It’s the most robust way to do accountability that we know of.

So when I publish my work (or even if I don’t get to publish my work!), I’ve gone through academic peer review and community peer review, which have different ideas of what is true and right and good, and I’ve met both of them. Our work is solid from both angles. Whenever I see something published in an academic journal about citizen science ethics or community partnerships written by academics, and it hasn’t gone through community peer review, I can’t know if it’s valid because it hasn’t finished its peer review process. Because it hasn’t done that part of accountability. We can have really great ideas and they could be true and right and good by our standards, but those are not universal standards.

Transforming relations through citizen science?

Citizen science is in a unique place to make multiple futures for science, rather than just reproducing a status quo where certain types of people always get paid and certain people don’t, certain people always get keynotes and certain people don’t, certain people have knowledge and other people need education. These things don’t happen because someone is being a jerk– they get reproduced because it is easier, because that’s how infrastructure is set up, because that’s how we did it last time, because that’s the norm, because it just makes more sense. This is how power works.

Citizen science is in a good place to do science otherwise. Rather than being the never-quite-a-pomegranate to Western science’s pomegranate stick, where we are always trying to make citizen science “as good” as “regular” science, let’s get our cod on (yes, this metaphor may have raged out of control). Let’s not make citizen science the lesser sibling of accredited science, always trying to catch up. It can do its own things on its own terms– citizen science has a greater capacity (though it is not guaranteed) to do accountability better than “regular” science. It has the capacity to do diversity better, to do humility better, to do equity better, because “regular” science isn’t often handling those very well. Its infrastructure is cemented in more than citizen science’s– we have more flexibility here. So let’s use it as an opportunity to be more equitable, more humble, but also more of other values you may have– more collective, more community-oriented, more just, more accessible.[10]

Thank you very much for listening and for doing good work.[11]

Notes from this keynote by Lila Higgins @lilamayhiggins

This keynote was prepared in conversations with CLEAR members, Shannon Dosemagen, Ben Pauli, Nick Shapiro, and Rick Chavolla. Thank you. 

Other helpful bits and bobs:



  1. One of the frustrations that often comes up when I talk with other environmental justice folks is how people who want to do more inclusive and diverse work call us to ask for our contacts, our methods, our platforms, our materials. We do a lot of work with others to grow those things up, and they are collective things that should not be uprooted and placed elsewhere. It’s even more frustrating when others use our collective resources to get their own grants, skipping past us and our communities in the process.  I probably get one email a week where someone asks me to share “my list” of contacts in Indigenous communities. There is no list.
  2. For more on the politics of scientific measurement, see:
    Pine, Kathleen H., & Liboiron, Max. (2015). The politics of measurement and action. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3147-3156). ACM.
    boyd, danah, & Crawford, Kate. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–79.
    Desrosières, Alain. (2002). The Politics of Large Numbers: A History of Statistical Reasoning. Cambridge: Harvard University Press.
    Liboiron, Max. (2015). “Disaster Data, Data Activism: Grassroots Responses to Representing Hurricane Sandy,” in Extreme Weather and Global Media, Eds. Julia Leyda and Diane Negra, Routledge.
    Porter, Theodore. (1996). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.
  3. I am indebted to the work of Michelle Murphy in articulating what does and does not get reproduced as a form of power, specifically her work here: Murphy, Michelle. (2017). The economization of life. Duke University Press.
  4. We’ve published a paper about this– it’s fairly esoteric and technical in terms of the content, but its role is to have a peer-reviewed precedent that we can point at to say, “see, the universal protocol demonstrably didn’t work here.”
    McWilliams, Matt, Liboiron, Max, & Wiersma, Yolanda. (2018). Rocky shoreline protocols miss microplastics in marine debris surveys (Fogo Island, Newfoundland and Labrador). Marine Pollution Bulletin, 129(2), 480-486.
  5. If you work in a university, making open science hardware can be tricky, depending on your university’s intellectual property (IP) policy. If you’re interested in how we’ve struggled with it, the tales are chronicled in this academic paper:
    Liboiron, Max. (2017). Compromised agency: The case of BabyLegs. Engaging Science, Technology, and Society, 3, 499-527.
    Today, Memorial University has a new creator-owned IP policy that makes future inventions much, much easier to keep open.
  6. For the whole process, see this open access paper: Liboiron, Max, Justine Ammendolia, Katharine Winsor, Alex Zahara, Hillary Bradshaw, Jessica Melvin, Charles Mather, Natalya Dawe, Emily Wells, France Liboiron, Bojan Fürst, Coco Coyle, Jacquelyn Saturno, Melissa Novachefski, Sam Westcott, Grandmother Liboiron. (2017). “Equity in Author Order: A Feminist Laboratory’s Approach.” Catalyst: Feminism, Theory, Technoscience 3(2): 1-17.
  7. This doesn’t mean that author credit on an academic paper is always the best way to credit people. Often money is good, too, which I’ll talk about in a moment. But even if other forms of credit are valuable to those who create knowledge, it doesn’t mean we also give them author credit. IMHO.
  8. This is the protocol for gut collection and here are a couple of the papers we’ve produced that use the described protocol:
    Liboiron, Max, Melvin, Jess, Richárd, Natalie, Saturno, Jackie, Ammendolia, Justine, Charron, Louis, & Mather, Charles. (2018). Low incidence of plastic ingestion among three fish species significant for human consumption on the island of Newfoundland, Canada. Marine Pollution Bulletin, 141: 224-248.
    Melvin, Jess. (2017). Plastic ingestion in Atlantic cod (Gadus morhua) on the east coast of Newfoundland, Canada: results from a citizen science monitoring project, with policy recommendations for long-term monitoring (Master’s thesis).
    Liboiron, Max, Liboiron, France, Wells, Emily, Richard, Natasha, Zahara, Alex, Mather, Charles, Bradshaw, Hillary, & Murichi, Judyannet. (2016). “Low plastic ingestion rate in Atlantic Cod (Gadus morhua) from Newfoundland destined for human consumption collected through citizen science methods.” Marine Pollution Bulletin.
  9. We have written a paper on the how-to of ethnographic refusal. It’s been rejected from all the scientific journals we’ve submitted to, and we’d rather not submit to a social science journal, so in the meantime it sits as an un-reviewed pre-print here: Liboiron, M.; Zahara, A.; Schoot, I. Community Peer Review: A Method to Bring Consent and Self-Determination into the Sciences. Preprints 2018, 2018060104 (doi: 10.20944/preprints201806.0104.v1).
  10. Most of my students come into my classes with accessibility as their paramount value. But often accessibility can mimic equality– by bringing people into spaces that aren’t built for them, which can compound inequity, or can universalize something that is actually particular (like Western science). If accessibility is your jam, see Kelly Fritsch’s excellent work on the topic to help nuance your starting point:
    Fritsch, Kelly. (2016). Accessible. In K. Fritsch, C. O’Connor & AK Thompson (Eds.), Keywords for Radicals: The Contested Vocabulary of Late-Capitalist Struggle: 23-28. Chico, CA: AK Press.
    Fritsch, Kelly. (2016). Cripping Concepts: Accessibility. Review of Disability Studies: An International Journal, 12(4): 1-4.
  11. The last question of the panel was about what to do if you, as a researcher, already had asked your research question and couldn’t start from the beginning, or what to do if you were constrained by a structure that wasn’t yours to change (common for students, technicians, employees of a certain kind)?
    First, there is no space that is pure and blank from which to do social change work. There is no terra nullius. If we have identified a system that needs to change, there is generally no outside of that system from which to change it. Some activists call this “compromise,” where you will inevitably reproduce some part of the system you are trying to change in trying to change it (Fortun 2009, Hale 2006). This is a condition of doing change work, not a shortcoming. The trick is to be aware of what you are reproducing, and what you will not reproduce. For example, in CLEAR’s author work, we reproduce the focus on individuals as creators of knowledge by using people’s individual names in our publications rather than publishing as “CLEAR.” We’ve decided this as a group, since CLEAR has so many junior, women, Indigenous, and scholars of colour, and we want to ensure they can trade in the currency of the realm if they choose. Publishing as CLEAR would not allow that, even though it is a “better” representation of our value of humility.
    Second, we all have jurisdictions where we can make decisions. If you’re the Vice President of Research at a university, that’s a pretty big jurisdiction and a lot of change can happen. If you’re a student working on an advisor’s project, your jurisdiction is much smaller, but it still exists– how do you want to treat other collaborators, technicians, and samples? You can always aim to scale up from your jurisdiction, which is how a lot of social change happens, but you always have some place to start as well.
    Fortun, Kim. (2009). Advocacy after Bhopal: Environmentalism, disaster, new global orders. University of Chicago Press.
    Hale, Charles R. (2006). Activist research v. cultural critique: Indigenous land rights and the contradictions of politically engaged anthropology. Cultural Anthropology, 21(1), 96-120.