Geoffrey Everest Hinton CC FRS FRSC is an English-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
Since 2013 he has divided his time between Google (Google Brain) and the University of Toronto.
With David E. Rumelhart and Ronald J. Williams, Hinton co-authored a highly cited 1986 paper that popularized the backpropagation algorithm for training multi-layer neural networks.
He is viewed by some as a leading figure in the deep learning community and is sometimes referred to as the “Godfather of Deep Learning”.
The dramatic image-recognition milestone of AlexNet, designed by his student Alex Krizhevsky for the ImageNet challenge in 2012, helped revolutionize the field of computer vision.
Hinton was awarded the 2018 Turing Award alongside Yoshua Bengio and Yann LeCun for their work on deep learning.
Born on 6 December 1947 in Wimbledon, London, United Kingdom, Hinton turned 72 in December 2019.
Hinton’s family includes several notable figures.
Hinton is the great-great-grandson both of logician George Boole whose work eventually became one of the foundations of modern computer science, and of surgeon and author James Hinton.
His father was the entomologist Howard Hinton. His middle name comes from another relative, George Everest.
He is the nephew of the economist Colin Clark. He lost his first wife to ovarian cancer in 1994.
Hinton contributed one chapter to the 2018 book Architects of Intelligence: The Truth About AI from the People Building It by the American futurist Martin Ford.
Hinton was educated at King’s College, Cambridge, graduating in 1970 with a Bachelor of Arts in experimental psychology.
He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins.
After his PhD he worked at the University of Sussex, and (after difficulty finding funding in Britain) the University of California, San Diego, and Carnegie Mellon University.
He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London, and is currently a professor in the computer science department at the University of Toronto.
He holds a Canada Research Chair in Machine Learning, and is currently an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research.
In 2012, Hinton taught a free online course on neural networks on the education platform Coursera. He joined Google in March 2013 when his company, DNNresearch Inc., was acquired.
He is planning to “divide his time between his university research and his work at Google”.
Hinton’s research investigates ways of using neural networks for machine learning, memory, perception and symbol processing. He has authored or co-authored over 200 peer-reviewed publications.
While he was a professor at Carnegie Mellon University (1982–1987), Hinton, David E. Rumelhart and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks.
Their experiments showed that such networks can learn useful internal representations of data. Although this work was important in popularizing backpropagation, it was not the first to suggest the approach.
Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed to use it to train neural networks in 1974.
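To make the idea concrete, here is a minimal sketch, in Python with NumPy, of training a two-layer sigmoid network on the XOR problem by propagating error derivatives backwards through the layers. It illustrates the general technique rather than the exact setup of the 1986 paper; the hidden-layer size, learning rate and iteration count are arbitrary choices.

```python
# Minimal backpropagation sketch: a 2-4-1 sigmoid network learning XOR.
# Illustrative only; sizes and hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: XOR, which a single-layer network cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: reverse-mode differentiation of the squared error,
    # propagating derivatives from the output layer back to the first layer.
    d_out = (out - y) * out * (1 - out)   # error derivative at the output pre-activations
    d_h = (d_out @ W2.T) * h * (1 - h)    # error derivative at the hidden pre-activations

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically approaches [0, 1, 1, 0]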
During the same period, he co-invented Boltzmann machines with David Ackley and Terry Sejnowski.
His other contributions to neural network research include distributed representations, time-delay neural networks, mixtures of experts, Helmholtz machines and products of experts.
In 2007 Hinton co-authored an unsupervised learning paper titled “Unsupervised Learning of Image Transformations”. An accessible introduction to Geoffrey Hinton’s research can be found in his articles in Scientific American in September 1992 and October 1993.
In October and November 2017 respectively, he published two open access research papers on the theme of capsule neural networks, which according to Hinton are “finally something that works well.”
Notable former PhD students and postdoctoral researchers from his group include Richard Zemel, Brendan Frey, Radford M. Neal, Ruslan Salakhutdinov, Ilya Sutskever, Yann LeCun and Zoubin Ghahramani.
Geoffrey Hinton designs machine learning algorithms. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications.
His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets.
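A restricted Boltzmann machine is one concrete instance of a product of experts, and deep belief nets are typically built by stacking such machines. The sketch below is an illustrative toy rather than code from any of Hinton’s papers: it trains a tiny restricted Boltzmann machine with one-step contrastive divergence (CD-1), the approximate learning rule Hinton proposed for such models. The layer sizes, toy data, learning rate and number of sweeps are arbitrary.

```python
# Toy restricted Boltzmann machine trained with one-step contrastive divergence (CD-1).
# Illustrative only; all sizes, data and hyperparameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    # Draw binary samples from independent Bernoulli probabilities.
    return (rng.random(p.shape) < p).astype(float)

# Toy binary data: two repeated patterns the model should learn to reconstruct.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

lr = 0.1
for epoch in range(1000):
    v0 = data
    # Positive phase: hidden probabilities driven by the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = sample(p_h0)
    # Negative phase: one step of Gibbs sampling (reconstruction).
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = sample(p_v1)
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 update: data-driven statistics minus reconstruction-driven statistics.
    W   += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Reconstructions of the two training patterns should be close to the originals.
h = sigmoid(data[:2] @ W + b_h)
print(np.round(sigmoid(h @ W.T + b_v), 2))
```

After training, pushing a training pattern up to the hidden layer and back down should yield probabilities close to the original bits, a quick sanity check that the weights have captured the structure of the data.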
He received his PhD in Artificial Intelligence from Edinburgh in 1978 and spent five years as a faculty member in Computer Science at Carnegie-Mellon.
He then moved to the Department of Computer Science at the University of Toronto where he directs the program on “Neural Computation and Adaptive Perception” for the Canadian Institute for Advanced Research.
Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence.
He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society.
He has received honorary doctorates from the University of Edinburgh and the University of Sussex.
He was awarded the first David E. Rumelhart Prize (2001), the IJCAI Award for Research Excellence (2005), the IEEE Neural Network Pioneer Award (1998), the Killam Prize for Engineering (2012), and the NSERC Herzberg Gold Medal (2010), which is Canada’s top award in science and engineering.
While toiling away in the face of academic indifference, Hinton hit a more serious, private hurdle in the early ’90s when he became a single father.
Not long after he and his first wife adopted their babies, Ros died of ovarian cancer. Used to living in his head and at the lab, Hinton was thrown into the corporeal world of raising two small children.
His son has ADHD and other learning difficulties, and even with a nanny, Hinton had to be home at 6 p.m., managing support for his son and rushing to sales at the Gap for socks.
“I cannot imagine how a woman with children can have an academic career. I’m used to being able to spend my time just thinking about ideas.
Teaching is interesting but a bit of a distraction, and the rest of life—I don’t have time for it,” Hinton says. “But with small kids, it’s just not on.” By “it” Hinton presumably means thinking—or life.
Still, work provided safe harbour from the realities at home. “I sometimes think I use the things to do with numbers and math as a defence against the emotional side of me,” Hinton says.
Parenting has forced a change. “It used to be when I went into the supermarket and the cashier couldn’t add up two numbers I’d think: ‘For god’s sake why can’t they hire a cashier who can do arithmetic?’ And now I think:
‘It’s really nice the supermarket would hire this person.’” He adds: “I didn’t want to be a better person, it just happened. It wasn’t one of my goals.”
In 1997, he remarried, to a British art historian, Jackie. Three years ago, she was diagnosed with pancreatic cancer, and now Hinton is, unfathomably, on the edge of losing a second wife.
In his life, Hinton has spent a lot of time in hospitals. He annoys staff by peppering them with questions.
He knows first-hand the patient’s frustrations of waiting for results and receiving vague information.
But unlike most people, he also knows that there will be, very soon, technology that can collapse a one-week wait for a test result to one day.
For a restrained Brit who usually leaves the AI proselytizing to others, Hinton is effusive about the potential of deep learning to revolutionize health care; the topic lights him up in a way that flying cars don’t.
“I see a lot of the inefficiencies in how medical professionals use data. There’s more information in a patient’s history than gets used. I see the fact that doctors really can’t read CT scans very well.
If you get two radiologists reading the same scan, they get two different readings.”
On three separate occasions, medical staff told his wife she had secondary tumours based on CT scan readings, and they were wrong each time. Hinton believes that AI will eventually put radiologists out of work—or at least eliminate the image-reading part of the job.
Recognition is the heart of AI, and also of successful diagnosis and treatment. “Ultimately, AI engineers will figure out how to train your immune system to attack cancer cells,” Hinton says.
One of Vector’s first projects, initiated by Hinton, will be connecting neural networks to the huge pools of data available at Toronto hospitals.
When Peter Munk recently donated $100 million to his eponymous cardiac-care centre, it was earmarked to turn the hospital into a world leader in digital cardiovascular health, and Vector will get some of those funds.
By accessing the massive data sets—essentially, patient archives—of an institute like the Munk Centre, AI tech could be used for a multitude of breakthroughs, including remotely monitoring a patient’s heartbeat and helping doctors pinpoint the ideal moment for discharge.
The Toronto start-up Deep Genomics, one of Vector’s partners, is developing AI that will be able to read DNA, which will help detect disease a generation early and determine the best treatment. Deep Genomics’ founder, Brendan Frey, was a student under Hinton.
After decades of sluggish pace, deep learning is moving fast, and Hinton seems to be caught in a Lorenzo’s Oil bind, pushing science forward urgently, attempting to outrun the clock ticking on the life of a loved one.
But pancreatic cancer is brutal and hard to diagnose in its early stages. “It may be too late for her, I’m afraid,” says Hinton, in his measured way.
Yoshua Bengio is a fellow deep learning pioneer based at the University of Montreal, one member of what’s been tagged in tech circles as “the Canadian AI mafia,” along with Hinton and Facebook’s Yann LeCun.
For decades, when Bengio has had work to do in Toronto, he has stayed at Hinton’s Annex house, taking long walks with him (Hinton walks everywhere, because his back doesn’t hurt when he’s upright, and vehicles require sitting).
He’s been watching Hinton’s rise to tech celebrity status with some wariness for his friend. “He’s not a god. He’s fallible. He’s just a human doing his human thing,” says Bengio.
“Sometimes he can see things with dark glasses. His personal life has not been easy for him. He has his darker times.”
In September, Hinton and his wife made it to their Muskoka cottage for a couple of days. It was beautiful at that time of year. “She’s both extremely brave and extremely sensible, so she just thinks she’s getting extra time, which she’s determined to make the best of,” he says.
Then he asks if I’ll do him a favour. “I would really like it if you would include in the story the idea that I’ve been able to continue doing my work for the past two and a half years because my wife has had such a positive attitude about her cancer,” he says calmly. “Thank you very much.”
The Vector Institute, Toronto’s answer to the AI brain drain, has a new-car smell, a name befitting a supervillain’s lair and a first-day-of-school vibe.
Canada’s newest research institute for artificial intelligence, located on the seventh floor in the MaRS complex at College and University, opened its doors late last fall.
Its space-age glass walls face the Romanesque solemnity of Queen’s Park and the University of Toronto, both of which are Vector partners.
With more than $100 million in combined provincial and federal funding, and $80 million from more than 30 private partners, including the big Canadian banks, Air Canada, Telus and Google, Vector is a public-private hybrid—mixing academia, public institutions and industry.
The 20 scientists who have so far been hired are already pursuing technological answers to some of the world’s biggest problems: how can AI be used to diagnose cancer in children and detect dementia in speech?
How can we build machines to help humans see as well as animals or compose beautiful music, or use quantum computing to speed up the analyzing of massive amounts of data humans are generating daily?
Raquel Urtasun, one of Vector’s key hires, will divide her time between Vector and Uber, where she’s developing self-driving cars.
Today’s frenzy around AI isn’t just about money, but also about the rapid pace of AI integration into everyday life.
The distance between a flip phone and an iPhone X with face recognition was less than 10 years, and many prominent scientists are wary that the technology is sprinting ahead of our ability to manage it.
Stephen Hawking, Elon Musk and Bill Gates have all warned against the dangers of unfettered AI. “I fear that AI may replace humans altogether,” Hawking said recently.
Hinton is aware of the ethical implications: he signed a petition to the UN calling for a ban on lethal autonomous weapons—otherwise known as killer robots—and refused a position on a board connected to the Communications Security Establishment because of concerns about the potential security abuses of AI.
He believes the government needs to step in and create regulations that prevent the military from exploiting the technology he’s spent his life perfecting—and specifically, he says, from developing robots that kill people.
For the most part, though, Hinton is sanguine about AI anxiety. “I think it’s going to make life a lot easier. The potential effects people talk about have nothing to do with the technology itself but have to do with how society is organized.
Being a socialist, I feel that when the technology comes along that increases productivity, everyone should share in those gains.”
Last summer, Hinton and I had lunch in the Google cafeteria downtown. The space has the daycare aesthetic of most digital companies, with bright colours, amoeba couches and an array of healthy lunch options being eaten by a lot of people under 30.
On the patios are a mini-putt course and a pollinator beehive. An espresso machine whirs loudly. It’s hard to imagine this is where the machine invasion might start, and yet….
“The apocalypse scenario where computers take over—that’s not something that could happen for a very long time,” says Hinton, standing and eating his quinoa and chicken.
“We’re a long, long way away from anything like that. It’s fine for philosophers to think about, but I’m not particularly interested in that issue because it’s not something I’m going to have to deal with in my lifetime.” He is ever deadpan, so it’s hard to tell if he’s joking.
But what about the ways in which this dependence on machines changes us? I tell him that whenever my phone prompts me with a suggested response (“Sounds good!” “See you there!”), I feel like I’m losing agency.
I become mechanized myself. Pop culture has been funnelling this exact apprehension since 2001: A Space Odyssey. In entertainment, machine progress is braided to a personal loneliness, a loss. It’s almost as if, by the machine becoming more human, we become less human.
Hinton listens and looks at me not unkindly, but with a trace of incredulity. “Do you feel less human when you use a pocket calculator?” he asks.
Around him, the Google millennials eat salad and drink their coffee, their key cards swinging from their hips. Almost all of them are on their phones, or holding their phones. “We’re machines,” says Hinton. “We’re just produced biologically.
Most people doing AI don’t have doubt that we’re machines. We’re just extremely fancy machines. And I shouldn’t say just. We’re special, wonderful machines.”
He was elected a Fellow of the Royal Society (FRS) in 1998 and was the first winner of the Rumelhart Prize in 2001.
In 2001, he was awarded an Honorary Doctorate from the University of Edinburgh.
He was the 2005 recipient of the IJCAI Award for Research Excellence lifetime-achievement award. He has also been awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering. In 2013, Hinton was awarded an Honorary Doctorate from the Université de Sherbrooke.
In 2016, he was elected a foreign member of the National Academy of Engineering “For contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision”. He also received the 2016 IEEE/RSE Wolfson James Clerk Maxwell Award.
He has won the BBVA Foundation Frontiers of Knowledge Award (2016) in the Information and Communication Technologies category “for his pioneering and highly influential work” to endow machines with the ability to learn.
Together with Yann LeCun, and Yoshua Bengio, Hinton won the 2018 Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
Hinton moved from the U.S. to Canada in part due to disillusionment with Ronald Reagan-era politics and disapproval of military funding of artificial intelligence.
He has petitioned against lethal autonomous weapons. Regarding existential risk from artificial intelligence, he typically declines to make predictions more than five years into the future, noting that exponential progress makes the uncertainty too great.
However, in an informal conversation with the noted AI-risk alarmist Nick Bostrom in November 2015, overheard by the journalist Raffi Khatchadourian, he is reported to have said that he did not expect general A.I. to be achieved for decades (“No sooner than 2070”). Bostrom had earlier drawn a dichotomy between people who think managing existential risk from artificial intelligence is probably hopeless and those who think it is easy enough that it will be solved automatically; Hinton said that he is “in the camp that is hopeless.” He has stated, “I think political systems will use it to terrorize people,” and has expressed his belief that agencies like the N.S.A. are already attempting to abuse similar technology.
Asked by Nick Bostrom why he continues research despite his grave concerns, he stated, “I could give you the usual arguments. But the truth is that the prospect of discovery is too sweet.”
– a reference to a remark by J. Robert Oppenheimer when questioned about why he had proceeded with his research in the Manhattan Project.
According to the same report, Hinton does not categorically rule out human beings controlling an artificial superintelligence, but warns that “there is not a good track record of less intelligent things controlling things of greater intelligence”.