Researchers say we need not fear a new generation of thinking machines.
By Cory Schachtel | September 1, 2018
Ever since computers were clunky, whirring machines that took up entire floors, humans have marvelled at their potential, envisioning all the ways they could help or even be like us. Tapping into our own dark nature, science fiction tends to reach what creepily feels like the natural conclusion of obscenely smart machines with human dispositions: our demise. There’s no robot apocalypse on the horizon, but the revolution is well under way. Artificial intelligence isn’t coming. It’s here.
No, really, it’s here – in Edmonton, at the University of Alberta. It’s been here, in some form, since the ’60s, and it’s poised to lead the city, and world, into the future.
On April 1, 1964, the U of A built Canada’s first Department of Computing Science around five academics, a small support staff and the LGP-30, an 800-pound, deep-freeze-shaped digital computer. It was the university’s computational crown jewel, coveted across campus for its 32-bit capacity and 0.024-second multiplication speed – a mere million or so times smaller and slower than today’s standard laptop. Over half a century later, the department boasts nearly 50 academics and about the same number of non-academic staff, overseeing more than 400 graduate and undergraduate students.
Jonathan Schaeffer joined the department as a computer scientist in January 1984 and, even in those early days, it showed promise. It finished near the top in both chess and checkers world championships throughout the ’90s, with Schaeffer’s program, Chinook, eventually solving checkers in 2007. Schaeffer went on to become the U of A’s dean of science, but stepped down in August, citing differences with university leadership.
In 2001, Schaeffer and three fellow professors entered a machine learning research proposal in a province-wide funding competition run by the Alberta government. Their winning bid birthed the Alberta Machine Intelligence Institute (Amii), and its four founders still guide it today. “Most research programs in Canada are short-term,” Schaeffer says. “They’re usually a one-year contract, sometimes three, but five is like wow. Our centre is unique in that it’s been funded continuously for 15 years.”
While funding is the recognition that matters most for longevity, Amii has also earned high praise internationally. Based on Amii’s major journal and conference publications, the global Computer Science Rankings has found that, over the previous 25 years, the U of A has been one of the most productive AI research groups in the world despite its smaller size – second only to Carnegie Mellon University in Pittsburgh, and ahead of places like Stanford and MIT. “We knew we were good, but not how good, until that came out,” Schaeffer says. “One of my colleagues said, ‘How the hell did you guys get such a good AI group in the sub-Arctic?’ We aren’t a household name, unless you’re in computing science and AI. Then you know exactly where and who we are, because there’s an outstanding group of people here.”
Until recently, that group experienced a ton of turnover, lured away by more prestigious cities and high-paying companies around the world. “Many of my former students work for Google in California, or off in Europe, not here giving back to taxpayers who helped fund their education,” Schaeffer says. “But I think there’s a new initiative to diversify the Edmonton economy, by creating a sector that is globally leading edge in one of the hottest research areas in the world. In a short time, we’ll be an importer of talent.” Fortunately, the biggest talent has been here a while, and he has no intention of leaving.
With his ponytail, beard and apparent aversion to ties, Rich Sutton looks more like a hippie than a computer scientist – you’d guess he’d be more comfortable in a garden than a lab. But, after a sentence or two, his intellect is obvious, and would be intimidating if his tone weren’t so measured and kind. Born in Ohio and raised in a Chicago suburb, he studied at Stanford and the University of Massachusetts from 1978 to 1984, then researched and worked on machine learning into the millennium. In 2003, he started teaching computing science at the U of A, where he played a primary role in advancing machine learning and making AI academics aware of exactly how they got such a “good group in the sub-Arctic.” Influenced by his psychology degree, Sutton pondered the problem of how to program a computer to think and learn. “A computer doing what it’s told couldn’t possibly be thinking,” he recalls. “Yet at the same time, what else could it be? It would just be a complicated program and complicated way of operating that would be a mind.” This posed the profound question that’s guided his career: Can we reproduce how our minds operate in a machine? “It was that mystery, right from the beginning – the impossibility, yet inevitability, of an electronic brain.”
Sutton’s insight sprang not from something he discovered, but from something fellow researchers had overlooked, based on a theory developed by machine learning scientist Dr. Harry Klopf, who said something was missing from the field of learning systems: a system that wants something. “Back then, all the machines were doing pattern recognition, learning from examples, or mimicking a training set,” Sutton says. “If they got a bad outcome, they didn’t try something different.” Klopf called it a “hedonistic” learning system, one that adapts its behaviour to maximize a special signal from its environment. Today, it’s called reinforcement learning.
Compared to other machine learning types, reinforcement learning lets the algorithm off the leash. Whereas supervised learning provides algorithms with an answer key of correct and incorrect actions, a reinforcement learning algorithm trains through trial and error, failing its way to success with no feedback beyond “this behaviour was (or was not) desirable.” Its most publicized success came from beating a world champion in the ancient Chinese game of Go, which, due to its deceptive complexity and astronomical number of possible permutations, was the last classic board game to fall. The Go-playing program, AlphaGo, was developed at Google’s DeepMind with U of A-trained researchers in key roles, and, after analyzing millions of moves from human Go experts, it became the first non-human champion in Go’s 2,500-year history. Not only did its self-taught successor, AlphaGo Zero, beat it 100 games straight, it did so making moves no previous player had considered.
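The distinction can be sketched in a few lines of Python. Below is a toy “hedonistic” learner of the kind Klopf described: a three-armed bandit agent that gets no answer key, only a reward signal it tries to maximize by occasionally exploring and otherwise exploiting its best guess. This is an illustrative sketch only; the reward values and parameters are invented, not drawn from Amii’s work.

```python
import random

def train_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy action-value learning: no labelled examples,
    just trial, error, and a reward signal to maximize."""
    rng = random.Random(seed)
    n = len(true_rewards)
    estimates = [0.0] * n  # the agent's running estimate of each action's value
    counts = [0] * n
    for _ in range(steps):
        # Explore a random action occasionally; otherwise exploit the best estimate.
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: estimates[a])
        # The environment returns a noisy reward -- the only feedback the agent gets.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # Incremental sample-average update toward the observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# The agent is never told that action 2 is best; it discovers this from reward alone.
estimates = train_bandit([0.2, 0.5, 0.9])
```

A supervised learner would instead be handed (situation, correct action) pairs up front; here the agent must discover the best action by failing at the others first.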
Beating humans at our own games (and beating the machines that beat humans) – to a degree that makes human-versus-machine matches pointless – is a big step for researchers, but comes across like a neat trick to the public. Most people’s reaction to hearing humans lost again is something like “that’s cool, but when do I get my self-driving car?” They’re coming – much sooner than killer robots. The U of A has already collaborated with Mitsubishi Electric. And Twitter. IBM has been a partner for 13 years. Then there’s Google’s DeepMind, which, after plucking the university’s best and brightest for years, returned the favour by opening its first international AI research office in Edmonton in 2017, based largely on Sutton’s leadership (he literally wrote the book on reinforcement learning).
These global companies are paying close attention to our city entirely because our researchers are the best in this field – that’s consensus, not arrogance. So why aren’t we the next Silicon Valley? Part of it is because AI is so relatively new, and reinforcement learning is newer still. There’s also the nature of computer scientists, especially those influenced by Sutton’s unassuming demeanour; their work alone drives and fulfills them – they don’t care if we don’t care. But it’s also due to a lack of vision from the business community, which, if it’s not careful, could miss a momentous economic chance.
“We have this crown jewel with Amii, but it’s more the tip of the iceberg.” Cory Janssen knows business and computers. He co-founded and sold financial news website Investopedia, and now runs a private investment company and AltaML, a developer of machine learning software. He’s smart, determined and borderline fearful of Edmonton business blowing its shot. “Amii has the Suttons of the world, among others, but in computer science, engineering and across the [U of A], they’re attracting talent from around the globe. They come here because of the academic research, but then they leave. As an entrepreneur, I see opportunity. I want the next DeepMind to be homegrown.”
Janssen’s on the hunt for machine learning researchers and developers, but he’s also trying to rally fellow businesspeople to collectively grab the AI baton and make Edmonton known outside academia. He and two partners started Edmonton.AI with the mission of creating 100 AI and machine learning companies and projects by 2020. Their strategy is to establish a central location for events where industry, academics and entrepreneurs can connect, and to tell stories promoting AI and Edmonton’s expertise – something computers thankfully won’t be doing soon (I asked).
To be clear, the business community lacks knowledge, not enthusiasm. Janssen’s team has already talked with PCL, ATB and Servus (all university supporters), and Amii has almost 200 companies knocking on its door. This April, AccelerateAB hosted 450 investors and entrepreneurs at the Shaw to listen to experts, including Sutton, explain how AI and reinforcement learning can help their business – a one-time version of what Edmonton.AI hopes to make a permanent fixture. “Everyone is interested, but they don’t know the first step,” Janssen says. “This is like ’94, when there first was this internet thing. It seemed cool, but this was before Google, before browsers. We’re at that stage now, and the change will be as profound as it was then.”
The change is starting to take shape, but not just in business. In 2017, U of A and IBM scientists showed that, using machine learning to examine brain scans, they could predict the occurrence and severity of schizophrenia with 74 per cent accuracy, before any symptoms. Machine learning has also helped the University develop diagnostic and predictive tests for Alzheimer’s, cancer and diabetes.
Like beating board games, these early success stories show AI’s potential, with the practical difference of saving lives and doctors’ time, something Rashid Bux knows well. His company, BioMark Diagnostics Inc., creates technology that more accurately predicts and diagnoses cancers. When he sought out the next big thing to improve his company’s diagnostics, one field – and one institute – kept coming up. “We speak with oncologists all the time,” he says, “and we always ask: how robust is this data? Are there ways to improve the model?” In addition to molecular biomarkers, the company uses various imaging and lifestyle data to give doctors a 360-degree view of each studied patient’s cancer type and stage. It’s a wealth of information too vast and complex for human minds to decipher, but after six weeks, Amii blew Bux away. “It feels as if we’ve been working together for years,” he says. “We do a study, generate numbers, and give the data sets to Amii, who run their machine learning program and look for patterns. Then we run the numbers again to see if this pattern is consistent. That tells us what we should emphasize and look at going forward.”
As a foreign-born and educated entrepreneur who is opening an office in Edmonton this fall, Bux is an early example of exactly what the U of A, Amii and any civic-minded citizen should hope for: An influx of intelligence and business to diversify our economy and make everyone’s lives better. “The timing is right in Edmonton,” Bux says, “and there’s that drive in Alberta, to become a less oil-dependent, more knowledge-based province. It has to take leadership from the small group at Amii and scale it up, but the field is moving so fast, you sort of have to be there.”
Saving lives, streamlining our economy – this isn’t what most science fiction has told us AI will do. And while bad actors will always use new technology nefariously, researchers scoff at the thought of sentient AI overlords – Sutton even stands apart from some of his colleagues on the topic. He thinks people’s fear of general AI is simply our fear of “the other,” and compares its arrival to our ancestors meeting new humans for the first time. “The misconception is to think of AI as an alien thing, but it’s a very human-centric thing. All the apps are about making humans more powerful, and it’s already helping us figure out how our minds work,” he explains. “There shouldn’t even be a distinction; it should just be one topic: the science of mind, whether artificial or natural. It will eventually all converge.”