Why Human-Centred Approaches Lead to Better Algorithm Design

Author: Murphy  |  Views: 27890  |  Time: 2025-03-22 22:08:24

Algorithms often evoke fearful thoughts of cold, hard mathematical formulas beyond the grasp of many.

This is the approach taught in many computer science courses and textbooks – it was what I learned when I was a Comp Sci student in the 1990s.

Conceptually, this approach works well – for searching, sorting, calculating, organizing, etc. There are categories of well-established algorithms to get the job done.

But what about modelling an environment where qualitative data predominates? For example, when the data consists of informal approximations.

Traditionally, this is how humans solve problems in the face of many variables. Here, the decisions made are far more complex and intertwined with the fabric of society than many assume.

A massive failure while working on my PhD made me realize this, and it forever transformed my approach to algorithmic design.

Traditional Algorithmic Thinking

I went to school to become a computer scientist. This was back in the 1990s when most in the field were purists, taking the view that algorithm design was purely a mathematics endeavour.

The main purpose of algorithm design was to improve efficiency and to optimize whatever needed to be better: audio algorithms, storage algorithms, compression algorithms, speed algorithms.

My first job out of college as a programmer/analyst was to create an employee scheduling algorithm for industrial saw mills. All of the data was quantitative and was easily plugged in as input to our algorithm.

And presto! The weekly schedule was generated.

A few years later, back at school working on a PhD, I used this style of approach to algorithm design in my own research, and it was a disaster.

But maybe not in the way you think.

Let me explain.

Quantifying Qualitative Thinking

As mentioned, my undergraduate computer science degree followed the traditional approach of mathematics intertwined with computer programming and problem solving, culminating in the ability to design algorithms to solve problems.

When I started working on my PhD, having grown up in a remote rural community in northern Canada, I was very curious about the possibilities of an intersection between indigenous knowledge and predictive modelling.

I had a great idea!

Knowing about algorithmic design and knowing that remote rural communities at the time used traditional qualitative thinking, I could design an algorithm that quantified traditional qualitative thinking!

All that needed to be done was to convert non-numeric identifiers to their numeric values. Easy, right?
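As a minimal sketch of what I had in mind (the labels and the mapping here are hypothetical illustrations, not data from my actual research), the naive conversion looks like this:

```python
# Naive quantification: map informal, non-numeric labels to numbers.
# The labels and the ordering are hypothetical examples.
levels = ["few", "some", "many"]
encoding = {label: index for index, label in enumerate(levels)}

print(encoding["few"])   # lowest level maps to 0
print(encoding["many"])  # highest level maps to 2
```

Of course, this assumes the labels sit on a tidy linear scale – an assumption that, as I was about to learn, glosses over most of what makes qualitative knowledge valuable.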

And some similar research had been done already – for example with long-line fisheries folks in Grenada (Grant and Berkes, 2007).

Fishing boats in Grenada (Image courtesy of 12019 on Pixabay)

Traditionally, fisher people like the folks in Grenada would determine where the fish were and what they were eating based on approximations – about the weather, the water depth, the sea temperature, time of year, and so on.

In everyday practice, these fisher folks used approximate values like many, some, and few. Additionally, in their decision making, they would assign one or more of these variables a weight (also informal and non-numeric).

DALL-E image depicting the variables to account for when deciding where to fish.

Depending on the situation, a particular variable might have more weight than another variable. For example, water depth might be more important than the current weather for a particular species.

So by quantifying these approximations using fuzzy logic, Grant and Berkes designed an "expert system" (a type of algorithm) that provided predictive modelling.
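A rough sketch of the fuzzy-logic idea (this is not Grant and Berkes's actual system – the variables, ranges, and weights below are made-up illustrations) shows how linguistic approximations and informal weights become numbers a model can combine:

```python
# Sketch of fuzzy quantification: membership functions turn crisp
# observations into degrees of "favourable", and informal importance
# ("depth matters more than weather") becomes numeric weights.

def tri(x, a, b, c):
    """Triangular membership: 0 at a, peaks (1.0) at b, back to 0 at c."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 0.0

# Hypothetical "favourable" ranges for one species, as triangles (a, b, c).
favourable = {
    "depth_m": (10, 30, 60),  # prefers roughly 30 m of water
    "temp_c": (8, 12, 16),    # prefers roughly 12 C water
}

# Informal importance, quantified: depth matters more than temperature.
weights = {"depth_m": 0.7, "temp_c": 0.3}

def site_score(obs):
    """Weighted fuzzy score in [0, 1] for a candidate fishing site."""
    return sum(w * tri(obs[v], *favourable[v]) for v, w in weights.items())

print(site_score({"depth_m": 30, "temp_c": 12}))  # both variables at their peaks
print(site_score({"depth_m": 45, "temp_c": 10}))  # partial memberships
```

The approximations stay linguistic at the edges but become numbers inside the model – which is exactly the style of quantification I set out to apply to salmon knowledge.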

Awesome, right?

My algorithmic design was very similar in that I wanted to quantify local knowledge of salmon migrations and habits. These values could then be plugged into my spiffy AI model to generate predictive outcomes.

I spent more than a year developing this model on my own. I even had a series of articles published on my progress.

Sockeye Salmon preparing to spawn (Photo by Timon Cornelissen: pexels.com)

Once ready, I proposed the model to a few communities. They were not very receptive – in fact they were unanimously opposed to it, and for more than one reason.

Two important reasons explained to me were that this model would take away from:

  1. The interaction between elders and the younger community members, diminishing the passing down of knowledge.
  2. Learning on the land.

Additionally, it put this type of knowledge into the public domain – something that many Indigenous folks are very wary of (and rightly so).

This was a few big steps backward rather than the step forward that I thought I was taking.

So What Did I Miss?

Each society is different – the term society itself denotes what makes a group of people ordered and unique.

Hegemonic societies often forget that they are not the only society that is present.

To understand how algorithms make a difference in the daily life of a group of folks is to look at how these folks, as a society, engage with, and are conditioned by, their own algorithmic systems.

To assume that they don't have one is a common mistake.

As Lenglet (2011) puts it, it is important to undertake an ethnography of how such systems shape the world in which a society lives.

How does this group of people think? How do they solve problems? What is unique to their day-to-day existence?

I did not perform this crucial step in my research.

Add to this a simpler, even more important question that I neglected to ask: do they want any help from me?

I was trained as a traditional data scientist. I approached my research based on how I was taught – to use mathematical formulas to solve problems. I knew little to nothing about the human and sociological components of algorithmic design.

It turned out that for me, a PhD actually necessitated some philosophical thinking.

Which is the point, of course, but not one I was very prepared for.

The Bottom Line

Traditional algorithmic approaches often fail to consider the broader effects of technology (both positive AND negative) on communities and cultural practices.

In my own PhD research, I was a casualty of this narrow-minded approach.

And at the core, humans think qualitatively, and many folks are still comfortable with this way of thinking.

From a human-centred perspective, it is important to understand that one algorithm does not necessarily fit all.

Different forms of societal structure and thinking may demand different design considerations.

Or perhaps even no design at all.

Thank you for reading.

References

Grant, S., & Berkes, F. (2007). Fisher knowledge as expert system: A case from the longline fishery of Grenada, the Eastern Caribbean. Fisheries Research, 84(2), 162–170.

Lenglet, M. (2011). Conflicting codes and codings: How algorithmic trading is reshaping financial regulation. Theory, Culture & Society, 28(6), 44–66.


If this type of story is right up your alley, and you want to support me as a writer, subscribe to my Substack.

Subscribe to Data at Depth

On Substack, I publish a bi-weekly newsletter and articles that you won't find on the other platforms where I create content.
