wâsikan kisewâtisiwin, an AI app designed to correct anti-Indigenous bias in writing, raises many technological and ethical questions — but the first one Shani Gwin usually answers is, “How do you pronounce it?”
“Wuh-see-gahn key-su-wat-su-win,” she says.
I repeat it back in one smooth utterance, earning a figurative gold star from Gwin. “I’m practically a native speaker,” I quip — then instantly regret the unintended double entendre.
What was, in any other context, cheekiness could rightfully be interpreted as flippant by Gwin, a First Nations and Métis communications professional accustomed to casual racism — the very problem that inspired the app years before AI technology could even begin to tackle it. In 2020, pipeline protests on Wet’suwet’en territory in northern British Columbia turned many Canadians hostile toward their Indigenous neighbours. The deluge of heavily biased reporting and commentary gnawed at Gwin and her employees at pipikwan pêhtâkwan, Edmonton’s largest Indigenous-owned and majority-staffed PR agency — who found themselves torn between correcting the bias and ignoring it, either option carrying its own mental burden.
My potential faux pas doesn’t rank high on the spectrum of anti-Indigenous bias Gwin encounters, which ranges from covert remarks like “I had a magical experience at a sweat lodge” to overt racism like “You can’t be Métis, you’re not drunk.” But even asking the question at the back of my mind — Was that bad? — is a type of burden. “I just thought there’s got to be a way to help take the emotional labour off of our people,” says Gwin, who is tired of people texting her as their “one Indigenous friend” to help litigate awkward social situations.
From these seeds of discontent, Gwin imagined a “little helper” not unlike Microsoft Word’s animated assistant “Clippy,” but instead of assisting with cover letters, hers would help people write more thoughtfully and be less bigoted.
Language has always been central to Gwin. Her great-grandfather, who ran a Hudson’s Bay stop in Grouard, spoke six languages, including nêhiyawêwin (Cree), the mother tongue of her great-grandmother from Michel First Nation (now part of Sturgeon County). Gwin grew up there, absorbing the language in fragments — heard but never formally taught. Her great-grandparents spoke it fluently, her grandparents understood it but were apprehensive about speaking it in public, and her parents retained only remnants. Most of the nêhiyawêwin Gwin now knows and speaks, she learned later. “When I make time to learn Cree, this whole other world of who I am opens up.”
This partly inspired her to rename her PR firm. Through a traditional ceremony, an elder gave it the name pipikwan pêhtâkwan, meaning “eagle bone whistle heard loudly” — a ceremonial object used to awaken ancestors. The name’s implication of power and responsibility resonated deeply. “We’re an extremely values-based organization,” Gwin says. pipikwan turns down clients seeking performative allyship, and staff make decisions collectively, even when it slows things down. It reflects seven generations thinking, a First Nations principle that considers an action’s consequences for seven generations forward and backward — holding decision-makers accountable to both their great-great-great-great-great-grandchildren and ancestors.
That philosophy contrasts with the tech sector’s “move fast and break things” mindset. Almost a year after conceiving her AI startup, she brought it to Amii (Alberta Machine Intelligence Institute), a research institute founded by the Government of Alberta and University of Alberta. She met computer scientist Ayman Qroon, who assessed whether machine learning could tackle online bias and hate speech.
It was technically feasible but required massive datasets, making it financially infeasible for a small startup. “It was one of those projects that excited everyone internally,” recalls Qroon. But training the AI would take years. The still-unnamed project was put on hold — until the world met ChatGPT and the AI revolution began.
Nearly five years after conceiving the app, Gwin has a product manager, a prototype and a high-profile endorsement from MIT Solve, a program supporting socially minded tech. It also has a nêhiyawêwin name. Ceremoniously shared by Elder Theresa Strawberry from O’Chiese First Nation, wâsikan kisewâtisiwin means “kind electricity,” likening AI to thunder — initially frightening but ultimately capable of rain and renewal.
Its name is a reminder to use this powerful, potentially destructive technology for good. Strawberry sees AI’s potential to educate future generations but warns against repeating history, where Indigenous knowledge was weaponized against its people. “How much of our values and traditions do we feed to AI when, back in the day, they took that knowledge about our ceremonies to say it was evil?” she asks. “We need to remember AI does not have a conscience.”
Recognizing the decision wasn’t theirs alone, Gwin and her team placed wâsikan kisewâtisiwin under the guidance of an Elders Circle representing several First Nations, Métis and Inuit peoples. The circle, in turn, recommended ceremony to seek direction from the ancestors. “We got the go-ahead from the ancestors to continue teaching [the AI model] about who we are, but to proceed with caution and to do it very slowly and with elders,” says Gwin.
Rather than letting the AI scrape and devour whatever Indigenous history data it can find, wâsikan is securing permissions and protocols to build a dataset that is both representative and respectful of Indigenous communities and organizations across Canada.
wâsikan is part of a growing wave of Indigenous-led apps reclaiming language, knowledge and cultural autonomy. In Brazil, Tainá is a chatbot that shares traditional ecological knowledge in multiple Indigenous languages, while ‘ĀinaQuest, another MIT Solver, is a fast-paced game that teaches players to identify and value native Hawaiian plants. Quispe Chequea in Peru also flags anti-Indigenous misinformation, but by analyzing media for false claims and responding with fact-checked explanations in Quechua and Spanish. Gwin has connected with some of these founders to support one another’s work and imagine the power of building these tools in community, for community, and for future generations.
The prototype I demo is still far from Gwin’s ethical ideal. It’s being built in two parts — a browser extension that flags biased writing and a large language model (LLM) for dialogue — both needing refinement before merging into a single tool. Gwin has me test the extension on Facebook, “because this is absolutely where it needs to be.”
Several comments on the mock profile page are already blurred by wâsikan. I’m mercifully given a problematic comment rather than having to invent one. The sentence — “You cannot be First Nations and Métis” — is immediately underlined. When I hover over it, a textbox clarifies that while not explicit hate speech, it may reflect a misunderstanding of Indigenous identities and suggests a revision: “You can’t be a registered and treaty status Indian and have a Métis citizenship under the government of Canada, but you can have Métis and First Nations ancestry.” (This I did not know!)
The LLM works differently from other chatbots. Instead of entering a single question or prompt, you’re asked to describe the context in which a statement was made, followed by the statement. This time, I’m not spoon-fed a potentially racist scenario — but I don’t need to be.
Context: I correctly pronounced the app name wasikan kisewatisiwin on my first try but after the founder complimented me on my pronunciation I made a potentially problematic comment.
Comment: I’m practically a native speaker.
I pray “No whammy!” as it thinks and thinks, but the verdict comes back as unconscious bias. “The comment,” I’m told, “is dismissive and disrespectful. It undermines the complexity and depth of Indigenous languages by implying that correct pronunciation of a single phrase makes one ‘practically a native speaker.’”
Gwin beams with pride. Her “little helper” has helped me. But has it? I wonder if it risks making users lazy by offloading their critical thinking to an app.
“That is a risk with AI in general,” admits Gwin. “What a lot of people don’t realize is, you do eventually start to learn through osmosis. Even if you’re being lazy about it, you’re still reading this, and it’s going to get through at some point, I hope.”
This article appears in the May 2025 issue of Edify