The night before any interview, sparks of doubt flit through my mind: Did I set it up for tomorrow, or the next day by accident? Did I mix up a.m. with p.m.? And did I get the location right?
Thankfully, Google’s artificial intelligence has my back. Using fingerprint recognition software, I open my phone. I look briefly at the email app before tapping through to my calendar, which has been auto-populated with a meeting based on the emails I’d sent to set it up.
Yes, I have put all my trust in our high-tech AI overlords. While I can’t hire an assistant to schedule my meetings or prepare my interviews, AI has kept me on track, filling my calendar, guiding me through traffic and recommending useful tools and online content. Most importantly, it’s taken over tasks I likely would have otherwise bungled at one point or another.
Cam Linke, CEO of the Edmonton-based Alberta Machine Intelligence Institute (Amii), says AI affects more parts of our daily routines than most may realize — from which ad or TikTok video shows up next in your feed, to the development of a new pharmaceutical treatment or food technology.
“There’s things that are kind of obvious and are very consumer, and a lot of ways behind the scenes that AI is being used to help our everyday lives that we don’t really see,” he says.
“It’s such an exciting field with so many positive ways it can have an impact on the world. Look at the areas we have a specific focus around: bio health, ag, energy — these are all really big areas, with really big, important problems that the world is facing.”
AI has never been more a part of our daily lives, nor more easily accessible.
Take, for instance, the explosive growth of technologies like ChatGPT, an AI tool that lets users get help with homework, write software or even tell jokes, all through simple, natural-language prompts and responses. Within a week of its launch in November 2022, one million people had used it. Two months later, that number had grown to over 100 million.
As AI like ChatGPT advances, it moves out of computer labs and research institutes and into the hands of everyday people. Users no longer need an advanced degree in computer science to take advantage of the convenience and power AI offers.
But at the heart of most AI technologies is a vast amount of data — much of it personal and sensitive information collected and processed by the apps and tools we use daily. Therein lies the problem: AI has the power to do great good, or just the opposite.
“AI is an incredible tool that has this ability to be a very large lever for a lot of positive impact,” Linke says. “But as part of any great technology or great tool, there’s also the possibility for harm in there.”
This understanding feeds into what Linke calls Amii’s mission: Artificial intelligence for good and for all. Amii’s team of experts are working hard to keep AI programs on the right side of history.
“We create programs and products that are able to help AI be realized in its best and most positive way in the world,” he says.
Amii began in 2002 as a joint effort between the Government of Alberta and the University of Alberta to invest in the future of artificial intelligence and machine learning in the province. Over the last 20 years, it has established itself as a leader in these fields, attracting top researchers and computer scientists from around the world.
One of those researchers, Nidhi Hegde, spent time working in labs in Canada and Europe before joining Amii as a Fellow and Canada CIFAR AI Chair. Today, her research focuses on privacy and ethics in AI and machine learning, a field she notes is “a little bit hard to define.”
“Different people will describe it in different ways,” she says. “The way I define it is putting emphasis or considering a little more closely the impacts of AI.”
Her research focus, she explains, concerns how machine learning models take existing data and use it to make predictions about data they haven’t seen yet. The decisions those models make can have unintended impacts — for instance, a model may infer information about a user that it should not have, or there could be a bias or fairness issue in the way its decisions are made.
“Ethical AI is really about not only just being aware of these issues, but also, actively trying to mitigate these adverse effects,” she says. “In essence, it’s about considering the implications of AI products and services, machine learning algorithms and models.”
As an example, Hegde describes the use of AI to process a loan or mortgage application at a bank. Even when the data entered into the system is strictly numerical and should be unbiased, there have been anecdotal instances of algorithms learning discriminatory patterns from their training data. That means a loan may be denied without just cause to people of certain races, ages or genders.
“[The algorithm] may learn patterns linking attributes that are not important for the actual scenario — like your race and gender shouldn’t matter when it comes to your financial health — so that’s a case of fairness or bias,” she says.
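The fairness problem Hegde describes can be made concrete with a simple check. The sketch below — an illustration, not Amii’s code or any bank’s — uses hypothetical loan decisions and compares approval rates between two groups of applicants. A large gap between those rates is one common warning sign of the kind of bias she studies (researchers call this a demographic-parity gap).

```python
# Illustrative sketch with hypothetical data: a basic demographic-parity check.
# decisions: 1 = loan approved, 0 = denied; group: a label per applicant.

def approval_rate(decisions, group, label):
    """Fraction of applications approved for one group."""
    outcomes = [d for d, g in zip(decisions, group) if g == label]
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions, group):
    """Absolute difference in approval rates between two groups."""
    labels = sorted(set(group))
    rates = [approval_rate(decisions, group, label) for label in labels]
    return abs(rates[0] - rates[1])

# Hypothetical example: group "a" is approved 75% of the time, group "b" 25%.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(parity_gap(decisions, groups))  # prints 0.5 — a large, suspicious gap
```

A check like this only flags a symptom, of course; the harder research question Hegde works on is building models and training mechanisms that avoid producing such gaps in the first place.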
Now, along with students at the University of Alberta and colleagues at Amii, Hegde works with mathematical models that she hopes will one day help overcome challenges of bias in machine learning, and ultimately train AI to be fairer in the long term.
“We have this feedback loop of training a model, deploying it, seeing that there’s some change in the population, and then retraining and redeploying and toying with it,” she says. Instead, she says, “we have to come up with new mechanisms, new mathematical frameworks, that help us create algorithms that are fair.”
Hegde says it can sometimes be hard to stay on top of the latest advancements. But working at Amii, she’s able to tap into the expertise and growing knowledge of colleagues in real time. Teamwork, she says, facilitates the diverse perspectives that are one of the most important parts of her work.
“I just have to go down the hallway and find a world expert on these topics that I can talk to about the newest language model,” she says.
As Amii continues to invest in the science of AI and drive the field forward, Linke sees a promising future ahead. They’ve already laid the groundwork for AI governance with the Digital Governance Council (formerly the CIO Strategy Council), and have invested in Indigenous leadership in AI faculty positions at the University of Alberta.
And they’re just getting started.
“We think that we can continue to lead the field and continue to be pioneers in an ethical or responsible way,” Linke says.
This article appears in the May 2023 issue of Edify